CES 2023 showcased the future of AI in computing, but the real payoff won't arrive until 2024 and beyond.
The explosive growth of consumer-facing AI apps like AI art generators and ChatGPT was one of the most dynamic stories of 2022. But don't get too excited: you don't need to buy a new laptop with built-in AI hardware from AMD or Intel just yet. Keep your wallet closed for now.
At the rate artificial intelligence is evolving, though, it could be the star of CES next year.
That's sort of the point. Tucked into the mix of processor architectures that make up AMD's latest mobile Ryzen chips is XDNA, a new AI hardware architecture that AMD introduced with the Ryzen Mobile 7040 series under the "Ryzen AI" brand. (Think of XDNA as the AI counterpart to RDNA, the architecture behind AMD's Radeon graphics cores.) Intel has similar plans, though for now it's using a separate Movidius AI card as a placeholder until the Meteor Lake chip arrives with AI truly "built in." Qualcomm, meanwhile, has shipped AI hardware for years in the Arm-based Snapdragon chips that power most smartphones, as well as a handful of Windows PCs.
Right now, there's really only one reason to buy an AI-powered PC: Windows Studio Effects, the suite of webcam technologies built into Microsoft's Snapdragon-powered Surface Pro 9 5G. Windows chief Panos Panay joined AMD CEO Lisa Su to show off Studio Effects at the XDNA launch event. The suite includes several functions: built-in background blur, a noise filter that strips out background sounds, eye contact, and automatic framing. Each uses artificial intelligence in some way. Eye Contact, for example, uses your webcam to track where your eyes are, then uses AI to make it appear as though you're looking directly at the camera.
It's a bit of a trick, but a useful one. Microsoft is essentially using AI to polish your presence on a video call, helping you appear more alert and engaged. Without these effects, you'd have to do the work yourself: staying in frame, keeping your eyes on the camera, and taking meetings in quiet places.
But AI will continue to show up in other ways, too.
Hardware first. Remember that a PC's audio acceleration started on the processor, moved to its own dedicated chip or card (RIP, Sound Blaster), and eventually migrated back onto the processor. GPUs followed a similar path, though discrete graphics cards live on as premium components. With AI, no one can be sure where the computation will settle: AI art applications like Stable Diffusion are designed to run on PC GPUs but demand a lot of video RAM. Users can instead ask cloud services like Midjourney or Microsoft's excellent Designer app to handle the AI computation for them, but they "pay" through subscriptions or advertising.
All of this raises real questions about the AI applications and services we'll see over the next few years. AI has exploded over the past six months, not only in art generators but also in AI-powered chatbots like ChatGPT, which some believe could replace or complement search engines like Bing. Companies have been quietly working on learning models and AI server hardware for years and are now reaping the rewards. Large AI models can take months to train, but models like ChatGPT can already produce text at a high-school level. What will the next generation do? Those of us who remember the early days, when Pentium chips pushed the PC into the mainstream, know the feeling: no one knows exactly what comes next, but it feels like that moment all over again.
It's fair to say that 2022 was the year AI entered the public consciousness. This year, 2023, looks like the year everything shakes out: which services make money, which services get used, and what society will actually want from them. We should see Stable Diffusion and other AI frameworks tuned for Ryzen AI and Movidius hardware, and we'll get a better sense of how far XDNA can go compared to RDNA.
It's an exciting future, but it's still the future. AMD told PCWorld that the XDNA architecture is built on FPGA (field-programmable gate array) technology it gained through its acquisition of Xilinx, which allows the hardware to be reconfigured quickly. FPGAs are typically used when a technology isn't fully settled and the hardware may still need to be modified and corrected.
The next chips, arriving in 2024 and 2025, should bring AI deeper into our computing lives, with top-to-bottom integration and dedicated logic. Who knows: we could use AI to "train" a digital model of ourselves to act as an agent, scheduling a dentist appointment, say, by knowing our calendar and reading our contacts. If anyone still used voicemail, we could even generate a video avatar to serve as visual voicemail.
Then again, people bet on the metaverse, too. The future a company wants isn't always the future that arrives.
We doubt you're in a hurry to buy a new PC just for its AI processor, and for now we'd advise you to wait. But we do believe AI will be part of your computing future, whenever it arrives.