TOKYO -- China is increasing its competitive edge in advanced technologies to combat global warming, a Nikkei survey shows, taking the global lead in patents related to the capture and sequestration of industrial carbon dioxide emissions.

China's lead in the area is three times as large as that of the second-place U.S. China is also the global market leader in batteries for electric vehicles and solar panels, giving it growing dominance in the decarbonization supply chain.

(Image credit: YouTube - Eric Parker)

YouTuber Eric Parker demonstrated in a recent video how dangerous it is to connect classic Windows operating systems, such as Windows XP, to the internet in 2024 without any form of security (including firewalls or routers). The YouTuber set up a Windows XP virtual machine with an utterly unsecured internet connection to see how many viruses it would attract. Within minutes, the OS was already under attack from several viruses.

It might seem silly to purposefully hook a PC up to the internet without any security. However, in the early 2000s, connecting a PC directly to the internet without a router was normal. Granted, Windows XP has a built-in firewall, and most people used anti-virus software at the time. Still, running in a completely unprotected state (intentionally or accidentally) was much easier than on newer operating systems. On top of this, running Windows XP unsecured in 2024 is even more dangerous because the operating system no longer receives security updates, making it very easy for hackers to get into the OS.

Two minutes after hooking up his Windows XP virtual machine to the internet, Eric Parker found a couple of viruses that had installed themselves on the machine, including one dubbed "conhoz.exe." Soon afterward, another virus automatically created a brand new Windows XP account dubbed "admina" that apparently hosted an FTP file server on the machine.

It didn't take long for many other trojans, viruses, and malware to appear on the system. Eventually, Eric Parker installed Malwarebytes on the XP machine to see how many viruses it would catch. It caught eight viruses classified as trojans, backdoors, DNS changers, and adware. There were still more viruses on the machine, but the free version of Malwarebytes Eric Parker used was only able to catch eight of them.

Windows 2000 suffers a similar fate

Eric Parker ran the same experiment on Windows 2000 and saw even worse effects on the older OS. Within minutes of exposing the OS to the internet (with all ports open, including the ports used for SMB), a virus installed itself on the computer and automatically shut down the virtual machine. After restarting the VM, more viruses appeared, eventually causing the operating system to blue screen.

These two demonstrations represent a worst-case scenario for both operating systems. Without even the most basic security measures, online attackers can use tools such as nmap to detect the specific operating system version a machine is running and, once they know the system is vulnerable, freely download and run viruses and malware directly on it.
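
To make this concrete, here is a minimal sketch of that kind of reconnaissance using nmap's OS-detection mode from Python. It assumes nmap is installed, the target address is illustrative, and the -O option typically needs root privileges; this is a generic example, not a reproduction of anything shown in the video.

import subprocess

target = "192.168.1.50"  # illustrative address of an exposed machine on the local network

# -O asks nmap to fingerprint the remote OS from TCP/IP stack quirks;
# the report includes an "OS details:" line (for an old, unpatched Windows build, for example).
result = subprocess.run(["nmap", "-O", target], capture_output=True, text=True, check=False)
print(result.stdout)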

This sort of severe security vulnerability does not exist in modern operating systems. Windows 10 and Windows 11, for example, have far more robust security measures that prevent malware from simply installing itself, even if the firewall is turned off. Eric Parker confirmed that Microsoft operating systems dating back to Windows 7 are unaffected by the previously demonstrated security vulnerabilities. He ran Windows 7 for hours without an anti-virus or firewall on another VM and found no viruses on the system.



Why it matters: If you are fed up with the noisy co-worker who won't stop hammering at their keyboard or that wailing baby on your flight, a headphone prototype from researchers at the University of Washington could be the upgrade you've been waiting for. Instead of simply dampening ambient sounds, this device uses AI to let you selectively filter out noises and focus on specific voices.

Most active noise-canceling headphones work by producing sound waves to counteract lower-frequency environmental rumbles like engine drones. But they end up canceling all sound in those frequencies, potentially removing audio you want to hear. The new prototype aims to give users more nuanced noise control.

The headphones have built-in microphones feeding audio to a neural network trained to recognize different types of sounds – barking dogs, ringing phones, bird calls, and more. Using a companion app, you can enable or disable categories, allowing just the noises you want to filter through the headphones.

But the really cool part is that the headphones can also zero in on a particular voice amid background chatter. Just tap a button and it will "enroll" the voice directly in front of you as the only sound to be amplified, dampening all other noise.
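
As a rough illustration of how class-based filtering like this can work, here is a minimal sketch: a stand-in classifier produces a soft mask per sound category for each short audio frame, and only the categories the user has enabled are passed through. All names, shapes, and the fake classifier are illustrative assumptions, not the University of Washington prototype.

import numpy as np

SAMPLE_RATE = 48_000
FRAME = 384  # 8 ms of audio at 48 kHz, roughly matching the latency budget mentioned above

allowed = {"speech", "birds"}  # categories the user toggled on in the companion app

def classify(frame: np.ndarray) -> dict[str, np.ndarray]:
    # Stand-in for the neural network: a real model would emit per-class
    # time-frequency masks; here we just return fixed soft masks.
    return {
        "speech": np.full_like(frame, 0.7),
        "birds": np.full_like(frame, 0.1),
        "engine": np.full_like(frame, 0.2),
    }

def filter_frame(frame: np.ndarray) -> np.ndarray:
    masks = classify(frame)
    keep = sum(m for name, m in masks.items() if name in allowed)
    return frame * np.clip(keep, 0.0, 1.0)  # attenuate everything outside the allowed classes

# In a real pipeline, microphone frames would stream through filter_frame()
# and the filtered audio would be played back in the earcups within a few milliseconds.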

Shyam Gollakota, who developed the technology with a team of researchers, presented the idea on May 16 at a conference held by the Acoustical Society of America and the Canadian Acoustical Association. He demoed a working prototype at the event, as reported by New Scientist .

Under the hood, the microphones pipe audio into an AI processor that deciphers and removes unwanted sounds in real time. This is done with just an 8-millisecond delay, which the researchers say should be enough to avoid weird latency hiccups. For the on-device AI processing, the current headphone rig uses an OrangePi board connected via USB rather than going through a cloud server.

Of course, this prototype isn't something you can buy just yet. Commercialization would likely require everything shrinking down to a tiny chip that could integrate into future wireless headphone designs.

That said, AI is already making its way into mainstream audio gear through algorithm-powered noise cancellation for microphones. But this flips that concept on its head, using AI to augment what the wearer can hear, not just what the mic picks up. Any device with a decent AI accelerator and mic input could theoretically offer this kind of selective noise muting.

I'll admit that this innovative tech triggered a mildly creepy Black Mirror thought – about the "White Christmas" episode where a woman uses implants to literally "block" out and mute her partner from her senses. If noise cancellation of this sort can help enhance voices, could it also help mute certain voices? Let's not get too carried away with sci-fi scenarios just yet.


The Linux Network File System (NFS) server code (NFSD) is seeing a new Netlink protocol introduced in Linux 6.10 as part of laying the groundwork for the new "nfsdctl" utility.

NFSD with Linux 6.10 is adding a new Netlink protocol intended for handling configuration of the NFSD server. The nfsdctl utility is being added to the nfs-utils user-space utilities for leveraging this new protocol.

Oracle's Chuck Lever comments in the pull request :
"One notable new feature in v6.10 NFSD is the addition of a new netlink protocol dedicated to configuring NFSD. A new user space tool, nfsdctl, is to be added to nfs-utils. Lots more to come here."

The nfsdctl utility can currently be used to get/set the listener info, the NFS versions, thread settings, and RPC processing info, and to start the NFS server. Expect more nfsdctl functionality as time moves on. The nfsdctl utility was inspired by the likes of NetworkManager's nmcli and libvirt's virsh.

The intent is for nfsdctl to eventually replace the "rpc.nfsd" utility. The nfsdctl utility is concurrently being whipped into shape for integration into the nfs-utils package.

In addition to the new NFSD Netlink protocol, the Linux 6.10 pull request also brings optimizations, code clean-ups, and fixes.

Today, LLano released this unique new Sony battery charger, which you can order on Amazon.

Key Features:

  • Fast Charging: This USB-C charger supports 18W output with an adaptive Type-C fast charger, taking your camera battery from zero to 100% in less than 1.5 hours, 50% faster than standard chargers. (90% of the camera battery chargers on the market are 5V Micro USB chargers.)
  • Special Design: Unique camera-look appearance (which other brands don't have).
  • Smart LED Display: Clearly shows the charging status of all the batteries, so you can tell at a glance whether they are charging or fully charged
  • Reliable Safety: Provides complete protection (including temperature control, over-voltage, short-circuit, overcharge, and surge protection) for you and your devices.


AYANEO Air 1S announced

The Chinese company unveils its premium lightweight Windows 11 gaming handheld.

AYANEO is updating its 2023 1S design with a new APU version. The 1S is a second-generation product in the AIR series, which was originally powered by Ryzen 5000U APUs (Zen 3). Last year, the device was upgraded to the Ryzen 7 7840U, a high-end, low-power Phoenix processor with 8 Zen 4 cores.

Today, AYANEO is announcing another upgrade, this time to the Ryzen 7 8840U, powered by nearly the same architecture. The CPU Zen4 cores remain unchanged, as do the built-in Radeon 780M RDNA3 graphics. However, there is an update to the AMD XDNA AI accelerator, which is now up to 60% faster than the 7000U series (now up to 16 TOPS). This is not a significant upgrade for a gaming handheld, though.

AIR 1S features, Source: AYANEO

The system weighs just 450 grams, thanks largely to the smaller 5.5-inch screen, but it is still heavier than its Ryzen 5000U-based predecessor, which weighed only 398 grams. AYANEO put significant marketing effort into highlighting that its devices are lighter than the Switch OLED, but in this case, the 1S will be heavier.

AIR 1S features, Source: AYANEO

It is clearly not a competitor to high-end handhelds like the ROG Ally or MSI Claw, which come with 7-inch screens, nor is it trying to be. However, the feature these premium devices lack is the AMOLED screen. AYANEO confirms it is an ultra-clear wide color gamut screen with a 1920×1080 resolution.

The 7840U version from last year was also available in an Ultra Thin design with a smaller battery and thinner chassis. There is no mention of such a variant for the 8840U from what we have learned.

AIR 1S features, Source: AYANEO

AYANEO is introducing a special edition called AYANEO AIR 1S x Eiyuden Chronicle, which will be limited to 100 units.

Handheld Gaming Consoles

| | AYANEO 1S | AYANEO 1S Ultra Thin | ASUS ROG Ally | Valve Steam Deck |
| --- | --- | --- | --- | --- |
| Architecture | AMD Zen4 & RDNA3 | AMD Zen4 & RDNA3 | AMD Zen4 & RDNA3 | AMD Zen2 & RDNA2 |
| APU | Ryzen 7 8840U 🆕 / Ryzen 7 7840U, 8C/16T up to 5.1 GHz | Ryzen 7 7840U, 8C/16T up to 5.1 GHz | Ryzen Z1 Extreme, 8C/16T up to 5.1 GHz / Ryzen Z1, 6C/12T up to 4.9 GHz | AMD Van Gogh, 4C/8T up to 3.5 GHz |
| SoC GPU | Radeon 780M, 12 CU up to 2.7 GHz | Radeon 780M, 12 CU up to 2.7 GHz | AMD iGPU, 12 CU @ 2.7 GHz (Z1E) / 4 CU @ 2.5 GHz (Z1) | AMD iGPU, 8 CU @ 1.6 GHz |
| External GPU | – | – | ROG XG Mobile (up to RTX 4090) | Not officially |
| Memory Capacity | 16/32 GB LPDDR5X | 32 GB LPDDR5X | 16 GB LPDDR5-6400 | 16 GB LPDDR5-5500 |
| Storage Capacity | 512GB/1TB/2TB/4TB | 2TB | 512GB/256GB | 256GB/512GB SSD, 64GB eMMC |
| Storage Type | M.2 NVMe 2280 SSD PCIe 4x4 | M.2 NVMe 2280 SSD PCIe 4x4 | M.2 NVMe 2230 SSD PCIe 4x4 | M.2 NVMe 2230 SSD PCIe 3x4, eMMC PCIe Gen2x1 |
| Display | 5.5″ 1920×1080, 60 Hz, 350 nits, AMOLED | 5.5″ 1920×1080, 60 Hz, 350 nits, AMOLED | 7″ 1920×1080, 120 Hz (VRR), 500 nits, 7 ms | 7″ 1280×800, 60 Hz |
| Connectivity | WiFi 6E, BT 5.2 | WiFi 6E, BT 5.2 | Wi-Fi AX, BT 5.2 | Wi-Fi AC, BT 5 |
| Battery | 38 Wh, 10050 mAh | 28 Wh, 7350 mAh | 40 Wh, 4S1P, 4-cell Li-ion | 40 Wh |
| Weight | 450 g | 405 g | 608 g | 669 g |
| Dimensions | 22.4 x 8.9 x 2.16 cm | 22.4 x 8.9 x 1.8 cm | 28.0 x 11.3 x 3.9 cm | 29.8 x 11.7 x 4.9 cm |
| OS | Windows 11 | Windows 11 | Windows 11 | SteamOS / Windows 11 |
| Retail Price | Ryzen 7 7840U MSRP: $899 (16G+512GB); Ryzen 7 8840U MSRP: TBC | $1,129 | $699/€799 (Z1E+16G+512GB); $599/€699 (Z1+16G+256GB) | $649/€679 (16G+512GB); $529/€549 (16G+256GB); $399/€419 (16G+64GB) |
| Release Date | 7840U: July 2023; 8840U: TBC 2024 | July 2023 | June 13th, 2023 (Z1E) / Q3 2023 (Z1) | February 2022 |

AYANEO has only presented the new system today, with no release date or pricing information provided. Given the upgrade to the Ryzen 7 8840U, one should expect it to be pricier than the previous model, which was priced at $899 when it came out in the most basic configuration.

Source: AYANEO PR



YMTC (Yangtze Memory Technologies) has announced the PC41Q, its first QLC SSD product built for commercial PC customers. YMTC says that as applications such as large AI models continue to develop, market demand for high-capacity, high-performance SSDs keeps growing, and that QLC NAND, storing 4 bits per cell, can give the storage market a cost-effective solution that combines high performance, high reliability, and large capacity.

The PC41Q uses a DRAM-less design with a Host Memory Buffer (HMB) scheme, built-in fixed and dynamic SLC caching, and fourth-generation 3D QLC NAND flash based on the Xtacking 3.0 architecture, offering high performance, high endurance, high reliability, and low power consumption. Its maximum sequential read speed is 5,500 MB/s with a per-die interface speed of 2,400 MT/s, improving performance, density, and endurance across the board while shortening device response times.

The PC41Q uses a PCIe 4.0 x4 interface, comes in M.2 2242 and M.2 2280 form factors, and is offered in 512GB, 1TB, and 2TB capacities. The new product is broadly suited to laptops, ultrabooks, desktops, all-in-ones, and other PC form factors, covering the needs of different user groups and of scenarios such as business work, everyday entertainment, and content creation, while lowering costs for customers and offering more flexible choices.

Thanks to a comprehensive built-in power-management mechanism, the PC41Q's standby power in the PS4 power state is as low as 2 mW, with an operating power of 4 W, significantly reducing device heating, extending battery life, and improving the user experience. YMTC says the PC41Q delivers a performance-per-power ratio of up to 1.75 MB/s/mW for outstanding energy efficiency.

Another Taiwanese semiconductor company has been hit by a cyberattack. At 5:23 p.m. on Friday, May 17, TPEx-listed semiconductor firm 逸昌科技 issued a material information announcement about an information security incident, explaining that some of the company's IT systems had come under attack by hackers.

With 京鼎精密, a major semiconductor firm under the Hon Hai (Foxconn) Group, having suffered a cyberattack in January this year, news of another security incident at 逸昌科技 has become a focus of attention for semiconductor companies across Taiwan.

逸昌科技 mainly provides services such as wafer testing, wafer laser trimming, IC final testing, visual inspection, tape-and-reel packaging, and shipping on behalf of customers.

According to the company's statement on the incident, it detected abnormal network traffic and found that some servers had been attacked by hackers. Its security team immediately activated defense and recovery mechanisms and is working with technical experts from an external security firm to handle the matter.

Notably, the company said its IT systems were still being restored, meaning that as of the announcement on Friday evening the affected systems had not yet fully recovered.

Microsoft is reportedly targeting a late 2026 launch for its next Xbox, with Infinity Ward working on a launch title. We already know Microsoft is working on its next-generation Xbox. The company has already promised “the largest technical leap ever seen in a hardware generation”, and that means a new console is on the horizon. […]


TerraMaster's D8 Hybrid combines hard disk drives and M.2 NVMe drives in a USB Type-C connected DAS that can also be used to add more drives to a NAS. Thanks to the 10 Gbps USB interface, it is able to outperform any Gigabit Ethernet NAS.

Although TSMC can't claim to be the first fab to use extreme UV (EUV) lithography – that title goes to Samsung – they do get to claim to be the largest. As a result, the company has developed significant experience with EUV over the years, allowing TSMC to refine how they use EUV tooling to both improve productivity/uptime, and to cut down on the costs of using the ultra-fine tools. As part of the company's European Technology Symposium this week, they went into a bit more detail on their EUV usage history, and their progress on further integrating EUV into future process nodes.

When TSMC started making chips using EUV lithography in 2019 on its N7+ process (for Huawei's HiSilicon), it held 42% of the world's installed base of EUV tools, and even as ASML ramped up shipments of EUV scanners in 2020, TSMC's share of EUV installations actually increased to 50%. Jumping ahead to 2024, the number of EUV litho systems at TSMC has grown 10-fold from 2019, and TSMC now accounts for 56% of the global EUV installed base, despite Samsung and Intel ramping up their own EUV production. Suffice it to say, TSMC made a decision to go in hard on EUV early on, and as a result they still have the lion's share of EUV scanners today.

Notably, TSMC's EUV wafer production has increased by an even larger factor; TSMC now pumps out 30 times as many EUV wafers as it did in 2019. Compared to the mere 10x increase in tools, TSMC's 30x jump in production underscores how the company has been able to increase its EUV productivity, reduce service times, and cut tool downtime overall. Apparently, this has all been accomplished using the company's in-house developed innovations.

TSMC's Leadership in EUV High Volume Manufacturing
Data by TSMC (Compiled by AnandTech)

| | 2019 | 2023 |
| --- | --- | --- |
| Cumulative Tools | 1X | 10X |
| Share of Global EUV Installed Base | 42% | 56% |
| EUV Wafer Output | 1X | 30X |
| Wafers per Day per EUV Tool | 1X | 2X |
| Reticle Particle Contamination | 1X | 0.1X |

TSMC says that it has managed to increase wafer-per-day-per-tool productivity of its EUV systems by two times since 2019. To do so, the company optimized the EUV exposure dose and the photoresist it uses. In addition, TSMC greatly refined its pellicles for EUV reticles, which increased their lifespan by four times (i.e., increases uptime), increased output per pellicle by 4.5 times, and lowered defectivity by a massive 80 times (i.e., improves productivity and increases uptime). For obvious reasons, TSMC does not disclose how it managed to improve its pellicle technology so significantly, but perhaps over time the company's engineers will share this with academia.

TSMC's EUV Pellicle Technology vs. Commercial
Data by TSMC (Compiled by AnandTech)

| | Commercial | TSMC (Claimed) |
| --- | --- | --- |
| Output | 1X | 4.5X |
| Defectivity | 1X | 0.0125X |
| Lifespan | 1X | 4X |

EUV lithography systems are also notorious for their power consumption. So, in addition to improving productivity of EUV tools, the company also managed to reduce the power consumption of its EUV scanners by 24% through undisclosed 'innovative energy saving techniques.' And the company isn't done there: they are planning to improve energy efficiency per wafer per EUV tool by 1.5 times by 2030.

Considering all the refinements that TSMC has managed to achieve with Low-NA EUV lithography by now, it is not terribly surprising that the company is quite confident that it can continue to produce cutting-edge chips in the future. Whereas rival Intel has gone all-in on High-NA EUV for their future, sub-18A nodes, TSMC is looking to leverage their highly-optimized and time-tested Low-NA EUV tooling instead, avoiding the potential pitfalls of a major technology transition so soon while also reaping the cost benefits of using the well-established tooling.

EK, the leading computer cooling solutions provider, is introducing the EK-Pro GPU WB RTX 4090 WindForce V2 - Nickel + Inox, an enterprise-grade full-cover water block designed specifically for the NVIDIA GeForce RTX 4090 WindForce V2 24G graphics card. The water block features a CNC-machined nickel-plated copper base paired with a precision laser-cut stainless-steel top, ensuring both durability and superior liquid cooling efficiency. It fully covers and cools the GPU, VRAM, and VRM components by directing the cooling liquid over these essential areas to efficiently dissipate heat.

Utilizing the Open Split-Flow cooling engine design, this water block delivers exceptional cooling performance. It boasts minimal hydraulic flow restriction, making it compatible with lower-powered water pumps or those running at reduced speeds without compromising efficiency. The jet plate and fin structure geometry are meticulously optimized to ensure uniform flow distribution with minimal loss, achieving optimal cooling efficiency. Remarkably, this water block maintains outstanding cooling performance even when the water flow is reversed, consistently surpassing expectations. It also features a single-slot I/O bracket.

  • Wang Chung-ming (王忠銘), the Kuomintang magistrate of Taiwan's Lienchiang County, led a delegation to Fuzhou on the mainland for the 26th Cross-Strait Fair for Economy and Trade and the seventh Fuzhou–Matsu consultation meeting
  • Wang revealed that the two sides discussed "water and bridge connections" between Matsu and Fujian, and preliminarily mapped out a bridge route from Langqi on the mainland to Nangan in Matsu
  • The two sides also signed three consensus documents on "Fuzhou–Matsu wetland ecological protection", "Fuzhou–Matsu tourism cooperation", and "Fuzhou–Matsu ecological aquaculture"; reports suggest that opening Matsu to mainland tourists may soon become a reality

Wang Chung-ming, the Kuomintang magistrate of Taiwan's Lienchiang County, led a delegation to Fuzhou on the mainland for the 26th Cross-Strait Fair for Economy and Trade, during which the seventh Fuzhou–Matsu consultation meeting was held on Friday (May 17). At the meeting, the two sides discussed issues including optimizing the "mini three links" as well as the "new four links" of bridge and water connections.

According to Taiwan's United Daily News, representatives from mainland Fujian and from Matsu discussed 12 topics at the meeting, including industrial cooperation, ecological protection, youth exchanges, cultural-tourism cooperation, and the development of maritime passenger and cargo routes, and reached multiple consensuses. The two sides also held an exchange ceremony for the three consensus documents on "Fuzhou–Matsu wetland ecological protection", "Fuzhou–Matsu tourism cooperation", and "Fuzhou–Matsu ecological aquaculture". In addition, the two sides reached five consensuses covering tourism resources, health tourism, cultural-tourism information exchange, tourism service standards, and communication mechanisms. The United Daily News also cited sources saying that opening Matsu to mainland tourists may soon become a reality, and that a "route-scouting" group of mainland travel-agency staff could set off to survey Matsu's attractions after Lai Ching-te's inauguration on May 20.

Wang, who led the delegation to Fuzhou, said he hoped to use the opportunity to help Matsu businesses expand into the mainland market, raise the profile of Matsu's tourism specialties, and attract more mainland tourists to visit Matsu. He wrote on Facebook that the Fuzhou–Matsu consultation meeting "has become an important bridge for cross-strait exchanges", with the two sides gaining a deeper understanding of each other's needs and strengths, seeking opportunities for cooperation, and striving for mutually beneficial agreements.

He also revealed that the two sides discussed inviting Fuzhou arts groups to perform in Matsu, establishing a cross-strait cultural exchange platform, jointly salvaging debris drifting at sea, and resuming joint fish-fry releases and fisheries cooperation in Matsu waters. They also discussed improving the "mini three links" ferry services from Fu'ao on Nangan and Baisha on Beigan to Langqi and Huangqi in Fujian, respectively.

Wang added that he negotiated with the Fuzhou side on the feasibility of the "new four links" water and bridge connections between Matsu and the mainland. On water supply, the short-term plan under study is to ship water to Matsu by boat from the wharf at Huangqi on the mainland side, with procedures to be drawn up so that Matsu can request deliveries whenever it faces a risk of water shortage. The long-term plan proposes that the mainland invest in building an undersea water-supply pipeline, with the Matsu side purchasing the water under contract.

As for the bridge, Wang said the two sides had preliminarily agreed on a route from Langqi on the mainland to Nangan in Matsu, with Taiwan's Taoyuan City structural engineers association serving as the window for exchanging and sharing engineering information between the two sides. Through this technical exchange platform, the plan is to be made more mature and complete so as to speed up its implementation.

The Lienchiang County that Wang governs on the Taiwan side consists mainly of the Matsu Islands, which lie less than ten kilometers from the mainland at their closest point, adjacent to the Fujian provincial capital of Fuzhou. Strictly speaking, Taiwan's Lienchiang County is "incomplete": besides Matsu, the county formerly also covered territory on the mainland. Today the PRC side likewise has a Lianjiang County, under Fuzhou, Fujian Province, administering the mainland portion, making it the only county since 1949 to be governed separately by the regimes on the two sides of the strait. Like Kinmen, Matsu nominally belongs to the Republic of China's Fujian Province, even though that provincial government has for practical reasons been streamlined and no longer maintains actual administrative agencies.

News sources: United Daily News, Central News Agency


Players in Japan have expressed clear dissatisfaction with Ubisoft's newly announced game, Assassin's Creed Shadows. The game's recently released trailer revealed its setting in Japan, but its cultural portrayal has stirred controversy.

The game features two main characters: Naoe, a Japanese female ninja, and Yasuke, a fictionalized African man accepted into Japan's samurai society by the daimyo Oda Nobunaga. This setup has drawn criticism from Japanese players, who expected more Japanese characters and a more authentic presentation of cultural elements.

Many players have voiced their displeasure on social media and forums, criticizing numerous cultural details in the game, such as the characters' costumes and the way weapons are used, which they say do not match Japanese history and culture. Some players have even called the game a misreading and appropriation of Japanese culture.

In addition, some players feel the game appears to be designed primarily for Western audiences without genuinely considering the feelings and expectations of Japanese players. This has sparked a broader discussion about the responsibility and sensitivity developers should exercise when creating games set in a specific cultural context.

Ubisoft has not yet responded to the criticism. With the release date approaching, these community reactions may affect how well the game is ultimately received.

Source: Gamereactor

This article is worth NT$58,900, or roughly US$1,800. That's right, the price of a brand-new Nikon 24-70mm F2.8S. If you're reading it, you've come out ahead, and if you're a repair technician, it's an exceptionally valuable read.

The article "【NRC】Nikon 24-70mm F2.8S In-Depth Analysis" first appeared on 攝影札記 Photoblog.

The grand reception Putin received in Beijing clearly demonstrated the importance of the relationship between the two countries. Putin praised China's role as Russia's number-one trading partner, while Xi Jinping regards Russia as an important force for counterbalancing the United States.

Sergei Bobylev/Sputnik

Chinese leader Xi Jinping held a ceremonial welcome for Russian President Putin in Beijing, shown here in an image provided by Russian state media.

I’m excited to announce that we’re developing a new set of Code Editor features to help school teachers run text-based coding lessons with their students.

New Code Editor features for teaching

Last year we released our free Code Editor and made it available as an open source project. Right now we’re developing a new set…



Windows Terminal is back with another preview release! Windows Terminal Preview 1.21 introduces long-awaited features like Buffer Restore and font fallback as well as new experimental features like Scratchpad and the ability to load up an image as a texture. There’s also a LOT MORE stuff, so check out the rest of this blog post to learn more!


Downloads: Windows: x64 Arm64 | Mac: Universal Intel silicon | Linux: deb rpm tarball Arm snap


Welcome to the April 2024 release of Visual Studio Code. There are many updates in this version that we hope you'll like, some of the key highlights include:

If you'd like to read these release notes online, go to Updates on code.visualstudio.com . Insiders: Want to try new features as soon as possible? You can download the nightly Insiders build and try the latest updates as soon as they are available.

Accessibility

Progress accessibility signal

The setting, accessibility.signals.progress , enables screen reader users to hear progress anywhere a progress bar is presented in the user interface. The signal plays after three seconds have elapsed, and then loops every five seconds until completion of the progress bar. Examples of when a signal might play are: when searching a workspace, while a chat response is pending, when a notebook cell is running, and more.

Improved editor accessibility signals

There are now separate accessibility signals for when a line has an error or warning, or when the cursor is on an error or warning.

We support customizing the delay of accessibility signals when navigating between lines and columns in the editor separately. Also, aria alert signals have a higher delay before playing them than audio cue signals.

Inline suggestions no longer trigger an accessibility signal while the suggest control is shown.

Accessible View

The Accessible View ( ⌥F2 (Windows Alt+F2 , Linux Shift+Alt+F2 ) ) enables screen reader users to inspect workbench features.

Terminal improvements

Now, when you navigate to the next ( ⌥↓ (Windows, Linux Alt+Down ) ) or previous ( ⌥↑ (Windows, Linux Alt+Up ) ) command in the terminal Accessible View, you can hear if the current command failed. This functionality can be toggled with the setting accessibility.signals.terminalCommandFailed .

When this view is opened from a terminal with shell integration enabled, VS Code alerts with the terminal command line for an improved experience.

Chat code block navigation

When you're in the Accessible View for a chat response, you can now navigate between next ( ⌥⌘PageDown (Windows, Linux Ctrl+Alt+PageDown ) ) and previous ( ⌥⌘PageUp (Windows, Linux Ctrl+Alt+PageUp ) ) code blocks.

Comments view

When there is an extension installed that is providing comments and the Comments view is focused, you can inspect and navigate between the comments in the view from within the Accessible View. Extension-provided actions that are available on the comments can also be executed from the Accessible View.

Workbench

Language model usage reporting

For extensions that use the language model, you can now track their language model usage in the Extension Editor and Runtime Extensions Editor. For example, you can view the number of language model requests, as demonstrated for the Copilot Chat extension in the following screenshot:

Local workspace extensions

Local workspace extensions, first introduced in the VS Code 1.88 release , is generally available. You can now include an extension directly in your workspace and install it only for that workspace. This feature is designed to cater to your specific workspace needs and provide a more tailored development experience.

To use this feature, you need to have your extension in the .vscode/extensions folder within your workspace. VS Code then shows this extension in the Workspace Recommendations section of the Extensions view, from where users can install it. VS Code installs this extension only for that workspace. A local workspace extension requires the user to trust the workspace before installing and running this extension.

For instance, consider the vscode-selfhost-test-provider extension in the VS Code repository . This extension plugs in test capabilities, enabling contributors to view and run tests directly within the workspace. The following screenshot shows the vscode-selfhost-test-provider extension in the Workspace Recommendations section of the Extensions view and the ability to install it.

Note that you should include the unpacked extension in the .vscode/extensions folder and not the VSIX file. You can also include only sources of the extension and build it as part of your workspace setup.

Custom Editor Labels in Quick Open

Last month, we introduced custom labels , which let you personalize the labels of your editor tabs. This feature is designed to help you more easily distinguish between tabs for files with the same name, such as index.tsx files.

Building on that, we've extended the use of custom labels to Quick Open ( ⌘P (Windows, Linux Ctrl+P ) ). Now, you can search for your files using the custom labels you've created, making file navigation more intuitive.

Customize keybindings

We've made it more straightforward to customize keybindings for user interface actions. Right-click on any action item in your workbench, and select Customize Keybinding . If the action has a when clause, it's automatically included, making it easier to set up your keybindings just the way you need them.

Find in trees keybinding

We have addressed an issue where the Find control was frequently opened unintentionally for a tree control, for example appearing in the Explorer view when the user intended to search in the editor.

To reduce these accidental activations, we have changed the default keybinding for opening the Find control in a tree control to ⌥⌘F (Windows, Linux Ctrl+Alt+F ) . If you prefer the previous setup, you can easily revert to the original keybinding for the list.find command using the Keyboard Shortcuts editor.

Auto detect system color mode improvements

If you wanted your theme to follow the color mode of your system, you could already do this by enabling the setting window.autoDetectColorScheme .

When enabled, the current theme is defined by the workbench.preferredDarkColorTheme setting when in dark mode, and the workbench.preferredLightColorTheme setting when in light mode.

In that case, the workbench.colorTheme setting is then no longer considered. It is only used when window.autoDetectColorScheme is off.

In this milestone, what's new is that the theme picker dialog ( Preferences: Color Theme command) is now aware of the system color mode. Notice how the theme selection only shows dark themes when the system is in dark mode:

The dialog also has a new button to directly take you to the window.autoDetectColorScheme setting:

In the input editor of the Comments control, pasting a link has the same behavior as pasting a link in a Markdown file. The paste options are shown and you can choose to paste a Markdown link instead of the raw link that you copied.

Source Control

Save/restore open editors when switching branches

This milestone, we have addressed a long-standing feature request to save and restore editors when switching between source control branches. Use the scm.workingSets.enabled setting to enable this feature.

To control the open editors when switching to a branch for the first time, you can use the scm.workingSets.default setting. You can select to have no open editors ( empty ), or to use the currently opened editors ( current , the default value).

Dedicated commands for viewing changes

To make it easier to view specific types of changes in the multi-file diff editor, we have added a set of new commands to the command palette: Git: View Staged Changes , Git: View Changes , and Git: View Untracked Changes .

Notebooks

Minimal error renderer

You can use a new layout for the notebook error renderer with the setting notebook.output.minimalErrorRendering . This new layout only displays the error and message, and a control to expand the full error stack into view.

Disabled backups for large notebooks

Periodic file backups are now disabled for large notebook files to reduce the amount of time spent writing the file to disk. The limit can be adjusted with the setting notebook.backup.sizeLimit . We are also experimenting with an option to avoid blocking the renderer while saving the notebook file with notebook.experimental.remoteSave , so that auto-saves can occur without a performance penalty.

Fix for outline/sticky scroll performance regressions

Over the past few months, we have received feedback about performance regressions in the notebook editor. The regressions are difficult to pinpoint and not easily reproducible. Thanks to the community for continuously providing logs and feedback, we could identify that the regressions are coming from the outline and sticky scroll features as we added new features to them. The issues have been fixed in this release.

We appreciate the community's feedback and patience, and we continue to improve Notebook Editor's performance. If you continue to experience performance issues, please don't hesitate to file a new issue in the VS Code repo .

Quick Search enables you to quickly perform a text search across your workspace files. Quick Search is no longer experimental, so give it a try! ✨🔍

Theme: Night Owl Light (preview on vscode.dev )

Note that all Quick Search commands and settings no longer have the "experimental" keyword in their identifier. For example, the command ID workbench.action.experimental.quickTextSearch became workbench.action.quickTextSearch . This might be relevant if you have settings or keybindings that use these old IDs.

Search tree recursive expansion

We have a new context menu option that enables you to recursively open a selected tree node in the search tree.

Theme: Night Owl Light (preview on vscode.dev )

Git Bash shell integration enabled by default

Shell integration for Git Bash is now automatically enabled . This brings many features to Git Bash, such as command navigation , sticky scroll , quick fixes , and more.

Configure middle click to paste

On most Linux distributions, middle-click pastes the selection. Similar behavior can now be enabled on other operating systems by configuring terminal.integrated.middleClickBehavior to paste , which pastes the regular clipboard content on middle-click.

ANSI hyperlinks made via the OSC 8 escape sequence previously supported only http and https protocols but now work with any protocol. By default, only links with the file , http , https , mailto , vscode and vscode-insiders protocols activate for security reasons, but you can add more via the terminal.integrated.allowedLinkSchemes setting.

New icon picker for the terminal

Selecting the change icon from the terminal tab context menu now opens the new icon picker that was built for profiles:

Theme: Sapphire (preview on vscode.dev )

Support for window size reporting

The terminal now responds to the following escape sequence requests; a small probe sketch follows the list:

  • CSI 14 t to report the terminal's window size in pixels
  • CSI 16 t to report the terminal's cell size in pixels
  • CSI 18 t to report the terminal's window size in characters
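
As a quick way to see these replies, here is a small Python probe that writes one of the sequences to the terminal and reads back the response. It assumes a Unix-like environment (it uses termios/tty) and a terminal emulator that actually answers the query; it is an illustrative sketch, not part of Windows Terminal itself.

import sys, termios, tty

def query(seq: str) -> str:
    fd = sys.stdin.fileno()
    old = termios.tcgetattr(fd)
    try:
        tty.setcbreak(fd)                 # unbuffered reads so we can capture the reply
        sys.stdout.write(seq)
        sys.stdout.flush()
        reply = ""
        while not reply.endswith("t"):    # these replies are terminated with 't'
            reply += sys.stdin.read(1)
        return reply
    finally:
        termios.tcsetattr(fd, termios.TCSADRAIN, old)

# CSI 18 t: window size in characters; the reply looks like ESC [ 8 ; rows ; cols t
print(repr(query("\x1b[18t")))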

⚠️ Deprecation of the canvas renderer

The terminal features three different renderers: the DOM renderer, the WebGL renderer, and the canvas renderer. We have wanted to remove the canvas renderer for some time but were blocked by unacceptable performance in the DOM renderer and WebKit not implementing webgl2 . Both of these issues have now been resolved!

This release, we removed the canvas renderer from the fallback chain so it's only enabled when the terminal.integrated.gpuAcceleration setting is explicitly set to "canvas" . We plan to remove the canvas renderer entirely in the next release. Please let us know if you have issues when terminal.integrated.gpuAcceleration is set to either "on" or "off" .

Debug

JavaScript Debugger

The JavaScript debugger now automatically looks for binaries in the node_modules/.bin folder when they are named in the runtimeExecutable configuration, resolving them by name automatically.

Notice in the following example that you can just reference mocha , without having to specify the full path to the binary.

{
	"name": "Run Tests",
	"type": "node",
	"request": "launch",
-	"runtimeExecutable": "${workspaceFolder}/node_modules/.bin/mocha",
-	"windows": {
-		"runtimeExecutable": "${workspaceFolder}/node_modules/.bin/mocha.cmd"
-	},
+	"runtimeExecutable": "mocha",
}

Languages

Image previews in Markdown path completions

VS Code's built-in Markdown tooling provides path completions for links and images in your Markdown. When completing a path to an image or video file, we now show a small preview directly in the completion details . This can help you find the image or video you're after more easily.

Hover to preview images and videos in Markdown

Want a quick preview of an image or video in some Markdown without opening the full Markdown preview ? Now you can hover over an image or video path to see a small preview of it:

Improved Markdown header renaming

Did you know that VS Code's built-in Markdown support lets you rename headers using F2 ? This is useful because it also automatically updates all links to that header . This iteration, we improved handling of renaming in cases where a Markdown file has duplicated headers.

Consider the Markdown file:

# Readme
- [Example 1](#_example)
- [Example 2](#_example-1)

## Example
...

## Example
...

The two ## Example headers have the same text but can each be linked to individually by using a unique ID ( #example and #example-1 ). Previously, if you renamed the first ## Example header to ## First Example , the #example link would be correctly changed to #first-example but the #example-1 link would not be changed. However, #example-1 is no longer a valid link after the rename because there are no longer duplicated ## Example headers.

We now correctly handle this scenario. If you rename the first ## Example header to ## First Example in the document above for instance, the new document will be:

# Readme
- [Example 1](#_first-example)
- [Example 2](#_example)

## First Example
...

## Example
...

Notice how both links have now been automatically updated, so that they both remain valid!

Remote Development

The Remote Development extensions allow you to use a Dev Container , remote machine via SSH or Remote Tunnels , or the Windows Subsystem for Linux (WSL) as a full-featured development environment.

Highlights include:

  • Connect to WSL over SSH

You can learn more about these features in the Remote Development release notes .

Contributions to extensions

GitHub Copilot

Terminal inline chat

Terminal inline chat is now the default experience in the terminal. Use the ⌘I (Windows, Linux Ctrl+I ) keyboard shortcut when the terminal is focused to bring it up.

The terminal inline chat uses the @terminal chat participant, which has context about the integrated terminal's shell and its contents.

Once a command is suggested, use ⌘Enter (Windows, Linux Ctrl+Enter ) to run the command in the terminal or ⌥Enter (Windows, Linux Alt+Enter ) to insert the command into the terminal. The command can also be edited directly in Copilot's response before running it (currently Ctrl+down , Tab , Tab on Windows & Linux, Cmd+down , Tab , Tab on macOS).

Copilot powered rename suggestions button

Copilot-powered rename suggestions can now be triggered by using the sparkle icon in the rename control.

Content Exclusions

GitHub Copilot Content Exclusions is now supported in Copilot Chat for all Copilot for Business and Copilot Enterprise customers. Information on configuring content exclusions can be found on the GitHub Docs .

When a file is excluded by content exclusions, Copilot Chat is unable to see the contents or the path of the file, and it's not used in generating an LLM suggestion.

Preview: Generate in Notebook Editor

We now support inserting new cells with inline chat activated automatically in the notebook editor. We show a Generate button on the notebook toolbar and the insert toolbar between cells, when the notebook.experimental.generate setting is set to true . It can also be triggered by pressing Cmd+I on macOS (or Ctrl+I on Windows/Linux), when the focus is on the notebook list or cell container. This feature can help simplify the process of generating code in new cells with the help of the language model.

Python

"Implement all inherited abstract classes" code action

Working with abstract classes is now easier when using Pylance. When defining a new class that inherits from an abstract one, you can now use the Implement all inherited abstract classes code action to automatically implement all abstract methods and properties from the parent class:

Theme: Catppuccin Macchiato (preview on vscode.dev )

New auto indentation setting

Previously, Pylance's auto indentation behavior was controlled through the editor.formatOnType setting, which used to be problematic if you wanted to disable auto indentation, but enable format on type with other supported tools. To solve this problem, Pylance has its own setting to control its auto indentation behavior: python.analysis.autoIndent , which is enabled by default.

Debugpy removed from the Python extension in favor of the Python Debugger extension

Now that debugging functionality is handled by the Python Debugger extension, we have removed debugpy from the Python extension.

As part of this change, "type": "python" and "type": "debugpy" specified in your launch.json file will both reference the path to the Python Debugger extension, so no changes to your launch.json files are needed in order to run and debug effectively. Moving forward, we recommend using "type": "debugpy" as this directly corresponds to the Python Debugger extension.

Socket disablement now possible during testing

You can now run tests with socket disablement from the testing UI on the Python Testing Rewrite. This is made possible by switching the communication between the Python extension and the test run subprocess to named pipes.

Minor testing bug fixes

Test view now displays projects using testscenarios with unittest and parameterized tests inside nested classes correctly. Additionally, the Test explorer now handles tests in workspaces with symlinks, specifically workspace roots that are children of symlink-ed paths, which is particularly helpful in WSL scenarios.

Performance improvements with Pylance

The Pylance team has been receiving feedback that Pylance's performance has degraded over the past few releases. We have made several smaller improvements in memory consumption and indexing performance to address various reported issues. However, if you are still experiencing performance issues with Pylance, we kindly request that you file an issue through the Pylance: Report Issue command from the Command Palette, ideally with logs, code samples, and/or the packages that are installed in the working environment.

Hex Editor

The hex editor now has an insert mode, in addition to its longstanding "replace" mode. The insert mode enables new bytes to be added within and at the end of files, and it can be toggled using the Insert key or from the status bar.

The hex editor now also shows the currently hovered byte in the status bar.

GitHub Pull Requests

There has been more progress on the GitHub Pull Requests extension, which enables you to work on, create, and manage pull requests and issues. New features include:

  • Experimental conflict resolution for non-checked out PRs is available when enabled by the hidden setting "githubPullRequests.experimentalUpdateBranchWithGitHub": true . This feature enables you to resolve conflicts in a PR without checking out the branch locally. The feature is still experimental and will not work in all cases.
  • There's an Accessibility Help Dialog that shows when Open Accessibility Help is triggered from the Pull Requests and Issues views.
  • All review action buttons show in the Active Pull Request sidebar view when there's enough space.

Review the changelog for the 0.88.0 release of the extension to learn about the other highlights.

TypeScript

File watching handled by VS Code core

A new experimental setting typescript.tsserver.experimental.useVsCodeWatcher controls whether the TypeScript extension uses VS Code's core file-watching support for its file-watching needs. TypeScript makes extensive use of file watching, usually with its own Node.js-based implementation. By using VS Code's file watcher, watching should be more efficient, more reliable, and consume fewer resources. We plan to gradually enable this feature for users in May and monitor for regressions.

Preview Features

VS Code-native intellisense for PowerShell

We've had a prototype for PowerShell intellisense inside the terminal for some time now, and only recently got some more time to invest in polishing it up. This is what it looks like:

Currently, it triggers on the - character or when ctrl+space is pressed. To enable this feature, set "terminal.integrated.shellIntegration.suggestEnabled": true in your settings.json file (it won't show up in the settings UI currently).

It's still early for this feature but we'd love to hear your feedback on it. Some of the bigger things we have planned for it are to make triggering it more reliable ( #211222 ), make the suggestions more consistent regardless of where the popup is triggered ( #211364 ), and bringing the experience as close to the editor intellisense experience as possible ( #211076 , #211194 ).

Say, you're writing some Markdown documentation and you realize that one section of the doc actually belongs somewhere else. So, you copy and paste it over into another file. All good, right? Well if the copied text contained any relative path links, reference links, or images, then these will likely now be broken, and you'll have to fix them up manually. This can be a real pain, but thankfully the new Update Links on Paste is here to help!

To enable this functionality, just set "markdown.experimental.updateLinksOnPaste": true . Once enabled, when you copy and paste text between Markdown files in the current editor, VS Code automatically fixes all relative path links, reference links, and all images/videos with relative paths.

After pasting, if you realize that you instead want to insert the exact text you copied, you can use the paste control to switch back to normal copy/paste behavior.

Support for TypeScript 5.5

We now support the TypeScript 5.5 beta. Check out the TypeScript 5.5 beta blog post and iteration plan for details on this release.

Editor highlights include:

  • Syntax checks for regular expressions.
  • File watching improvements.

To start using the TypeScript 5.5 beta, install the TypeScript Nightly extension . Please share feedback and let us know if you run into any bugs with TypeScript 5.5.

API

Improved support for language features in comment input editors

When writing a new comment, VS Code creates a stripped down text editor, which is backed by a TextDocument , just like the main editors in VS Code are. This iteration, we've enabled some additional API features in these comment text editors. This includes:

  • Support for workspace edits.
  • Support for diagnostics.
  • Support for the paste-as proposed API.

Comment text documents can be identified by a URI that has the comment scheme.

We're looking forward to seeing what extensions build with this new functionality!

Finalized Window Activity API

The window activity API has been finalized. This API provides a simple additional WindowState.active boolean that extensions can use to determine if the window has recently been interacted with.

vscode.window.onDidChangeWindowState(e => console.log('Is the user active?', e.active));

Proposed APIs

Accessibility Help Dialog for a view

An Accessibility Help Dialog can be added for any extension-contributed view via the accessibilityHelpContent property. With focus in the view, screen reader users hear a hint to open the dialog ( ⌥F1 (Windows Alt+F1 , Linux Shift+Alt+F1 ) ), which contains an overview and helpful commands.

This API is used by the GitHub Pull Request extension's Issues and PR views.

Language model and Chat API

The language model namespace ( vscode.lm ) exports new functions to retrieve language model information and to count tokens for a given string. Those are getLanguageModelInformation and computeTokenLength respectively. You should use these functions to build prompts that are within the limits of a language model.

Note : inline chat is now powered by the upcoming chat participants API. This also means registerInteractiveEditorSessionProvider is deprecated and will be removed very soon.

Updated document paste proposal

We've continued iterating on the document paste proposed API . This API enables extensions to hook into copy/paste operations in text documents.

Notable changes to the API include:

  • A new resolveDocumentPasteEdit method, which fills in the edit on a paste operation. This should be used if computing the edit takes a long time as it is only called when the paste edit actually needs to be applied.

  • All paste operations now are identified by a DocumentDropOrPasteEditKind . This works much like the existing CodeActionKind and is used in keybindings and settings for paste operations.

The document paste extension sample includes all the latest API changes, so you can test out the API. Be sure to share feedback on the changes and overall API design.

Hover Verbosity Level

This iteration we have added a new proposed API to contract/expand hovers, which is called editorHoverVerbosityLevel . It introduces a new type called the VerboseHover , which has two boolean fields: canIncreaseHoverVerbosity and canDecreaseHoverVerbosity , which signal that a hover verbosity can be increased or decreased. If one of them is set to true, the hover is displayed with + and - icons, which can be used to increase/decrease the hover verbosity.

The proposed API also introduces a new signature for the provideHover method, which takes an additional parameter of type HoverContext . When a hover verbosity request is sent by the user, the hover context is populated with the previous hover, as well as a HoverVerbosityAction , which indicates whether the user would like to increase or decrease the verbosity.

preserveFocus on Extension-triggered TestRuns

There is a proposal for a preserveFocus boolean on test run requests triggered by extensions. Previously, test runs triggered from extension APIs never caused the focus to move into the Test Results view, requiring some extensions to reinvent the wheel to maintain user experience compatibility. This new option can be set on TestRunRequest s, to ask the editor to move focus as if the run was triggered from in-editor.

Notable fixes

  • 209917 Aux window: restore maximised state (Linux, Windows)

Thank you

Last but certainly not least, a big Thank You to the contributors of VS Code.


Liwei Guo, Vinicius Carvalho, Anush Moorthy, Aditya Mavlankar, Lishan Zhu

This is the second post in a multi-part series from Netflix. See here for Part 1 which provides an overview of our efforts in rebuilding the Netflix video processing pipeline with microservices. This blog dives into the details of building our Video Encoding Service (VES), and shares our learnings.

Cosmos is the next generation media computing platform at Netflix. Combining microservice architecture with asynchronous workflows and serverless functions, Cosmos aims to modernize Netflix’s media processing pipelines with improved flexibility, efficiency, and developer productivity. In the past few years, the video team within Encoding Technologies (ET) has been working on rebuilding the entire video pipeline on Cosmos.

This new pipeline is composed of a number of microservices, each dedicated to a single functionality. One such microservice is Video Encoding Service (VES). Encoding is an essential component of the video pipeline. At a high level, it takes an ingested mezzanine and encodes it into a video stream that is suitable for Netflix streaming or serves some studio/production use case. In the case of Netflix, there are a number of requirements for this service:

  • Given the wide range of devices from mobile phones to browsers to Smart TVs, multiple codec formats, resolutions, and quality levels need to be supported.
  • Chunked encoding is a must to meet the latency requirements of our business needs, and use cases with different levels of latency sensitivity need to be accommodated.
  • The capability of continuous release is crucial for enabling fast product innovation in both streaming and studio spaces.
  • There is a huge volume of encoding jobs every day. The service needs to be cost-efficient and make the most use of available resources.

In this tech blog, we will walk through how we built VES to achieve the above goals and will share a number of lessons we learned from building microservices. Please note that for simplicity, we have chosen to omit certain Netflix-specific details that are not integral to the primary message of this blog post.

Building Video Encoding Service on Cosmos

A Cosmos microservice consists of three layers: an API layer (Optimus) that takes in requests, a workflow layer (Plato) that orchestrates the media processing flows, and a serverless computing layer (Stratum) that processes the media. These three layers communicate asynchronously through a home-grown, priority-based messaging system called Timestone . We chose Protobuf as the payload format for its high efficiency and mature cross-platform support.

To help service developers get a head start, the Cosmos platform provides a powerful service generator. This generator features an intuitive UI. With a few clicks, it creates a basic yet complete Cosmos service: code repositories for all 3 layers are created; all platform capabilities, including discovery, logging, tracing, etc., are enabled; release pipelines are set up and dashboards are readily accessible. We can immediately start adding video encoding logic and deploy the service to the cloud for experimentation.

Optimus

As the API layer, Optimus serves as the gateway into VES, meaning service users can only interact with VES through Optimus. The defined API interface is a strong contract between VES and the external world. As long as the API is stable, users are shielded from internal changes in VES. This decoupling is instrumental in enabling faster iterations of VES internals.

As a single-purpose service, the API of VES is quite clean. We defined an endpoint encodeVideo that takes an EncodeRequest and returns an EncodeResponse (in an async way through Timestone messages). The EncodeRequest object contains information about the source video as well as the encoding recipe. All the requirements of the encoded video (codec, resolution, etc.) as well as the controls for latency (chunking directives) are exposed through the data model of the encoding recipe.

//protobuf definition 

message EncodeRequest {
  VideoSource video_source = 1; // source to be encoded
  Recipe recipe = 2;            // including encoding format, resolution, etc.
}

message EncodeResponse {
  OutputVideo output_video = 1; // encoded video
  Error error = 2;              // error message (optional)
}

message Recipe {
  Codec codec = 1;              // including codec format, profile, level, etc.
  Resolution resolution = 2;
  ChunkingDirectives chunking_directives = 3;
  ...
}

Like any other Cosmos service, the platform automatically generates an RPC client based on the VES API data model, which users can use to build the request and invoke VES. Once an incoming request is received, Optimus performs validations, and (when applicable) converts the incoming data into an internal data model before passing it to the next layer, Plato.
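
As a rough sketch of what building such a request might look like, assume the .proto above has been compiled with protoc into a Python module named ves_pb2; the module name, the empty field values, and the final send step are illustrative stand-ins for the generated Cosmos client and Timestone messaging, not the real Netflix code.

import ves_pb2  # hypothetical module generated by: protoc --python_out=. ves.proto

request = ves_pb2.EncodeRequest(
    video_source=ves_pb2.VideoSource(),        # source locator fields omitted in this sketch
    recipe=ves_pb2.Recipe(
        codec=ves_pb2.Codec(),                 # e.g. AVC/AV1/VP9 selection, profile, level
        resolution=ves_pb2.Resolution(),       # e.g. 1920x1080
        chunking_directives=ves_pb2.ChunkingDirectives(),
    ),
)

payload = request.SerializeToString()          # Protobuf payload handed to the messaging layer
# An async send of `payload` would enqueue the request; the EncodeResponse arrives later.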


Plato

The workflow layer, Plato, governs the media processing steps. The Cosmos platform supports two programming paradigms for Plato: forward chaining rule engine and Directed Acyclic Graph (DAG). VES has a linear workflow, so we chose DAG for its simplicity.

In a DAG, the workflow is represented by nodes and edges. Nodes represent stages in the workflow, while edges signify dependencies — a stage is only ready to execute when all its dependencies have been completed. VES requires parallel encoding of video chunks to meet its latency and resilience goals. This workflow-level parallelism is facilitated by the DAG through a MapReduce mode. Nodes can be annotated to indicate this relationship, and a Reduce node will only be triggered when all its associated Map nodes are ready.

For the VES workflow, we defined five nodes and their associated edges, which are described below:

  • Splitter Node: This node divides the video into chunks based on the chunking directives in the recipe.
  • Encoder Node: This node encodes a video chunk. It is a Map node.
  • Assembler Node: This node stitches the encoded chunks together. It is a Reduce node.
  • Validator Node: This node performs the validation of the encoded video.
  • Notifier Node: This node notifies the API layer once the entire workflow is completed.

In this workflow, nodes such as the Notifier perform very lightweight operations and can be executed directly in the Plato runtime. However, resource-intensive operations need to be delegated to the computing layer (Stratum) or to another service. Plato invokes Stratum Functions for tasks such as encoding and assembling, where the corresponding nodes (Encoder and Assembler) post messages to the respective message queues. The Validator node calls another Cosmos service, the Video Validation Service, to validate the assembled encoded video.
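
As an illustration only (this is plain data, not the Plato DAG programming model), the dependency structure of these five nodes can be sketched as follows, with the Map/Reduce relationship noted on the Encoder and Assembler nodes:

# Illustrative sketch of the VES workflow DAG; node names follow the list above,
# but the data structure and helper are ours, not the Plato API.
from typing import Dict, List, Set

# Each node maps to the nodes it depends on. Encoder is the Map node (one
# instance per video chunk); Assembler is the Reduce node that runs only after
# all Encoder instances have finished.
VES_DAG: Dict[str, List[str]] = {
    "Splitter": [],
    "Encoder": ["Splitter"],
    "Assembler": ["Encoder"],
    "Validator": ["Assembler"],
    "Notifier": ["Validator"],
}

def ready_nodes(completed: Set[str]) -> List[str]:
    """Nodes whose dependencies have all completed and are ready to execute."""
    return [node for node, deps in VES_DAG.items()
            if node not in completed and all(d in completed for d in deps)]

# ready_nodes(set())         -> ["Splitter"]
# ready_nodes({"Splitter"})  -> ["Encoder"]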

Stratum

The computing layer, Stratum, is where media samples can be accessed. Developers of Cosmos services create Stratum Functions to process the media. They can bring their own media processing tools, which are packaged into Docker images of the Functions. These Docker images are then published to our internal Docker registry, part of Titus. In production, Titus automatically scales instances based on the depths of job queues.

VES needs to support encoding source videos into a variety of codec formats, including AVC, AV1, and VP9, to name a few. We use different encoder binaries (referred to simply as “encoders”) for different codec formats. For AVC, a format that is now 20 years old, the encoder is quite stable. On the other hand, the newest addition to Netflix streaming, AV1, is continuously going through active improvements and experimentation, necessitating more frequent encoder upgrades. To effectively manage this variability, we decided to create multiple Stratum Functions, each dedicated to a specific codec format and released independently. This approach ensures that upgrading one encoder will not impact the VES service for other codec formats, maintaining stability and performance across the board.

Within the Stratum Function, the Cosmos platform provides abstractions for common media access patterns. Regardless of file formats, sources are uniformly presented as locally mounted frames. Similarly, for output that needs to be persisted in the cloud, the platform presents the process as writing to a local file. All details, such as streaming of bytes and retrying on errors, are abstracted away. With the platform taking care of the complexity of the infrastructure, the essential code for video encoding in the Stratum Function could be as simple as follows.

ffmpeg -i input/source%08d.j2k -vf ... -c:v libx264 ... output/encoding.264
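
As a rough sketch of how such a command might be wrapped inside a Stratum Function: the paths mirror the example above, the encoder settings are placeholders, and the platform-provided mounting and upload machinery is intentionally not shown.

# Rough sketch only: invoke the encoder against the locally mounted source frames
# and write to a local output path that the platform persists to the cloud.
import subprocess

def encode_chunk(input_pattern: str = "input/source%08d.j2k",
                 output_path: str = "output/encoding.264") -> None:
    subprocess.run(
        ["ffmpeg", "-i", input_pattern, "-c:v", "libx264", output_path],
        check=True,  # let failures surface so the platform can retry the job
    )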

Encoding is a resource-intensive process, and the resources required are closely related to the codec format and the encoding recipe. We conducted benchmarking to understand the resource usage pattern, particularly CPU and RAM, for different encoding recipes. Based on the results, we leveraged the “container shaping” feature from the Cosmos platform.

We defined a number of different “container shapes”, specifying the allocations of resources like CPU and RAM.

# an example definition of container shape
group: containerShapeExample1
resources:
  numCpus: 2
  memoryInMB: 4000
  networkInMbp: 750
  diskSizeInMB: 12000

Routing rules are created to assign encoding jobs to different shapes based on the combination of codec format and encoding resolution. This helps the platform perform “bin packing”, thereby maximizing resource utilization.
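
The routing-rule syntax itself is internal to the platform, but conceptually the mapping looks like the sketch below; the shape names other than containerShapeExample1 are hypothetical.

# Conceptual sketch only, not the Cosmos routing-rule syntax: pick a container
# shape from the combination of codec format and encoding resolution.
CONTAINER_SHAPE_ROUTING = {
    ("AVC", "1080p"): "containerShapeExample1",
    ("AV1", "1080p"): "containerShapeExample2",  # hypothetical larger shape
    ("AV1", "2160p"): "containerShapeExample3",  # hypothetical largest shape
}

def shape_for(codec: str, resolution: str) -> str:
    # Fall back to the example shape when no specific rule matches.
    return CONTAINER_SHAPE_ROUTING.get((codec, resolution), "containerShapeExample1")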

An example of “bin-packing”. The circles represent CPU cores and the area represents the RAM. This 16-core EC2 instance is packed with 5 encoding containers (rectangles) of 3 different shapes (indicated by different colors).

Continuous Release

After we completed the development and testing of all three layers, VES was launched in production. However, this did not mark the end of our work. Quite the contrary. We believed, and still do, that a significant part of a service’s value is realized through iterations: supporting new business needs, enhancing performance, and improving resilience. An important piece of our vision was for Cosmos services to have the ability to continuously release code changes to production in a safe manner.

Because VES focuses on a single functionality, code changes pertaining to a single feature addition are generally small and cohesive, making them easy to review. Since callers can only interact with VES through its API, internal code is truly “implementation details” that are safe to change. The explicit API contract limits the test surface of VES. Additionally, the Cosmos platform provides a pyramid-based testing framework to guide developers in creating tests at different levels.

After testing and code review, changes are merged and are ready for release. The release pipeline is fully automated: after the merge, the pipeline checks out code, compiles, builds, runs unit/integration/end-to-end tests as prescribed, and proceeds to full deployment if no issues are encountered. Typically, it takes around 30 minutes from code merge to feature landing (a process that took 2–4 weeks in our previous generation platform!). The short release cycle provides faster feedback to developers and helps them make necessary updates while the context is still fresh.

Screenshot of a release pipeline run in our production environment

When running in production, the service constantly emits metrics and logs. They are collected by the platform to visualize dashboards and to drive monitoring/alerting systems. Metrics deviating too much from the baseline will trigger alerts and can lead to automatic service rollback (when the “canary” feature is enabled).

The Learnings

VES was the very first microservice that our team built. We started with basic knowledge of microservices and learned a multitude of lessons along the way. These learnings deepened our understanding of microservices and have helped us improve our design choices and decisions.

Define a Proper Service Scope

A principle of microservice architecture is that a service should be built for a single functionality. This sounds straightforward, but what exactly qualifies as a “single functionality”? “Encoding video” sounds good, but wouldn’t “encode video into the AVC format” be an even more specific single functionality?

When we started building VES, we took the approach of creating a separate encoding service for each codec format. While this has advantages, such as decoupled workflows, we were quickly overwhelmed by the development overhead. Imagine that a user asked us to add watermarking capability to the encoding: we would need to make changes to multiple microservices. Worse, the changes in all these services would be very similar, and we would essentially be adding the same code (and tests) again and again. This kind of repetitive work can easily wear out developers.

The service presented in this blog is our second iteration of VES (yes, we already went through one iteration). In this version, we consolidated encodings for different codec formats into a single service. They share the same API and workflow, while each codec format has its own Stratum Functions. So far, this seems to strike a good balance: the common API and workflow reduce code repetition, while separate Stratum Functions guarantee the independent evolution of each codec format.

The changes we made are not irreversible. If someday in the future, the encoding of one particular codec format evolves into a totally different workflow, we have the option to spin it off into its own microservice.

Be Pragmatic about Data Modeling

In the beginning, we were very strict about data model separation. We had a strong belief that sharing equates to coupling, and that coupling could lead to potential disasters in the future. To avoid this, we defined a separate data model for each service, as well as for each of the three layers within a service, and built converters to translate between the different data models.

We ended up creating multiple data models for aspects such as bit-depth and resolution across our system. To be fair, this does have some merits. For example, our encoding pipeline supports different bit-depths for AVC encoding (8-bit) and AV1 encoding (10-bit). By defining both AVC.BitDepth and AV1.BitDepth, constraints on the bit-depth can be built into the data models. However, it is debatable whether the benefit of this differentiation outweighs the downside of multiple data model translations.
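
To make the trade-off concrete, here is a small sketch of how codec-specific types can carry such constraints. The class names and the exact constraints are simplified stand-ins, not our actual data models.

# Illustrative sketch only: codec-specific bit-depth models with the constraint
# built in, as described above.
from dataclasses import dataclass

@dataclass(frozen=True)
class AvcBitDepth:
    value: int = 8
    def __post_init__(self) -> None:
        if self.value != 8:
            raise ValueError("our AVC encoding uses 8-bit only")

@dataclass(frozen=True)
class Av1BitDepth:
    value: int = 10
    def __post_init__(self) -> None:
        if self.value != 10:
            raise ValueError("our AV1 encoding uses 10-bit only")

# The cost: every layer or service with its own near-identical model needs an
# explicit converter to and from types like these.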

Eventually, we created a library to host data models for common concepts in the video domain. Examples of such concepts include frame rate, scan type, color space, etc. As you can see, they are extremely common and stable. This “common” data model library is shared across all services owned by the video team, avoiding unnecessary duplications and data conversions. Within each service, additional data models are defined for service-specific objects.

Embrace Service API Changes

This may sound contradictory. We have been saying that an API is a strong contract between the service and its users, and that keeping an API stable shields users from internal changes. This is absolutely true. However, none of us had a crystal ball when we were designing the very first version of the service API. It is inevitable that at a certain point this API becomes inadequate. If we cling too dearly to the belief that “the API cannot change”, developers will be forced to find workarounds, which are almost certainly sub-optimal.

There are many great tech articles about gracefully evolving APIs. We believe we also have a unique advantage: VES is a service internal to Netflix Encoding Technologies (ET). Our two users, the Streaming Workflow Orchestrator and the Studio Workflow Orchestrator, are owned by the workflow team within ET. Our teams share the same contexts and work towards common goals. If we believe updating the API is in the best interest of Netflix, we meet with them to seek alignment. Once a consensus to update the API is reached, the teams collaborate to ensure a smooth transition.

Stay Tuned…

This is the second part of our tech blog series, Rebuilding Netflix Video Pipeline with Microservices. In this post, we described the building process of the Video Encoding Service (VES) in detail, as well as our learnings. Our pipeline includes a few other services that we plan to share as well. Stay tuned for our future blogs on this topic of microservices!


The Making of VES: the Cosmos Microservice for Netflix Video Encoding was originally published in Netflix TechBlog on Medium.

Dr. Richard Brennan, Regional Emergency Director at the World Health Organization tells CNN's John Vause there will be long-term consequences impacting Palestinians across Gaza as thousands of children and elderly face starvation and malnutrition.

Date: 1/10/2019 - 2/10/2019

Location: The Camp - Lake Hawea Holiday Park

On this New Zealand trip we planned to camp at two paid campsites; the first to introduce is the Lake Hawea campground. Driving from Wanaka, a town in the central South Island, to Lake Hawea takes about 10-15 minutes.

(Map of the Lake Hawea campground)

Lake Hawea is divided into several camping areas. Besides the general tent-camping area, there is a designated glamping zone and a car-camping area (further divided into powered and unpowered sites), as well as cabins for accommodation.

We rented a general tent site, which cost NZD20 per person per night. We arrived just as New Zealand's camping season was beginning, so there were not many campers, and most visitors were car camping, leaving plenty of spots to choose from. We settled on a large grassy area by the shore of Lake Hawea.

(Glamping area)

(Cabins)

(Caravan Dump Station)

(Washing machines for hire, NZD4 per load; laundry powder can be purchased at the office)

(Very clean toilets and showers, with hot water)

(Kitchen)

(Books to read)

(There is also a radio and jigsaw puzzles)

(BBQ grill)

(Soft drink vending machine)

(Children's playground)

(Designated fire pits provided by the campground)

(Mountain bikes for hire)

The Camp | Lake Hawea Basic Information

Address: 1208 Lake Hawea – Makarora Rd, Lake Hawea, 9382

Phone: +64 (0)3 443 1767

Email: [email protected]

Website: https://thecamp.co.nz/

Fees: May to September - NZD15 per person per night / October to April - NZD20 per person per night (see the official website for the latest details)

Getting there: accessible only by private car; the drive from Wanaka takes about 10-15 minutes

Supplies: there is a small convenience store at the township and petrol station next to the campground; we recommend stocking up at the large supermarkets in Wanaka

Network: very poor; the SIM card we bought in Hong Kong had no signal at all. The campground has wifi, free for the first 30 minutes and charged after that

Facilities: toilets, showers (with hot water), kitchen (including stoves, fridge and basic cookware), washing machines, etc.

Overall: beautiful surroundings, not too far from town, and well equipped; it is perfectly suitable for families. It is not as famous as Lake Tekapo and is smaller, with less of a commercial feel, which actually adds a sense of peace and tranquility. I highly recommend it.