IBM’s new Telum Processor is the company’s first with an on-chip AI accelerator

The microprocessor contains 8 processor cores and is designed to process sensitive data in hybrid cloud deployments, according to the company.

The new Telum Processor will be the central processor chip for the next-generation IBM Z and LinuxONE systems, the company announced Monday. The processor has a dedicated on-chip accelerator for AI inference, and the design improves performance, security and availability. IBM presented the design at the Hot Chips conference.

The microprocessor contains 8 processor cores, clocked at more than 5GHz, with each core supported by a redesigned 32MB private level-2 cache. Together, the level-2 caches form a 256MB virtual level-3 cache and a 2GB virtual level-4 cache, according to the company.
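The arithmetic behind those cache figures is simple to check. The sketch below (Python, purely illustrative; the number of chips pooled for the level-4 cache is derived from the quoted sizes, not an IBM figure) shows how the per-core caches add up.

```python
# Back-of-the-envelope check of the cache sizes quoted above.
cores_per_chip = 8
l2_per_core_mb = 32  # redesigned private level-2 cache per core, in MB

# The 8 private level-2 caches pooled together form the virtual level-3 cache.
virtual_l3_mb = cores_per_chip * l2_per_core_mb
print(f"virtual level-3 per chip: {virtual_l3_mb} MB")  # 256 MB

# Reaching the quoted 2GB virtual level-4 cache implies pooling the virtual
# level-3 of several chips: 2048 MB / 256 MB = 8 chips (derived, not an IBM figure).
chips_pooled = (2 * 1024) // virtual_l3_mb
print(f"chips pooled for the 2GB virtual level-4: {chips_pooled}")
```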

The 1.5x growth in cache per core over the z15 generation is designed to enable a significant increase in both per-thread performance and total capacity, according to the company. IBM said in a press release that Telum's performance improvements will allow rapid response times in complex transaction systems, especially when they are augmented with real-time AI inference.

SEE: Global chip shortage: How manufacturers can cope over the long term (TechRepublic)

Telum also features security improvements, including transparent encryption of main memory. Telum's Secure Execution improvements are designed to provide increased performance and usability for Hyper Protect Virtual Servers and trusted execution environments, for use cases such as processing sensitive data in hybrid cloud deployments.

The predecessor IBM z15 chip was designed to enable seven nines of availability (only 3.16 seconds of downtime per year) for IBM Z and LinuxONE systems. IBM said Telum's redesigned eight-channel memory interface, which can tolerate complete channel or DIMM failures, improves on that availability.
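For context, the "nines" shorthand maps directly to allowed downtime: seven nines (99.99999% uptime) works out to roughly 3.16 seconds per year, as the short sketch below shows.

```python
# Downtime allowed per year at a given number of "nines" of availability.
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # about 31.6 million seconds

def downtime_per_year(nines: int) -> float:
    """Seconds of downtime per year at the given number of nines."""
    unavailability = 10 ** (-nines)
    return SECONDS_PER_YEAR * unavailability

for n in (5, 6, 7):
    print(f"{n} nines: {downtime_per_year(n):.2f} seconds/year")
# 7 nines -> about 3.16 seconds/year, matching the figure quoted for z15.
```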

Telum is the first IBM chip built with technology developed by the IBM Research AI Hardware Center. The company expects financial firms to use the chip to prevent fraud as transactions happen, rather than simply detect it afterward.
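The difference between preventing and detecting fraud largely comes down to where the inference call sits relative to the transaction. The sketch below is hypothetical (the score_fraud function stands in for a low-latency inference call; it is not an IBM API): scoring inside the payment path lets a risky transaction be declined before it commits, whereas scoring in a later batch job can only flag it after the fact.

```python
import random
import time

def score_fraud(transaction: dict) -> float:
    """Hypothetical stand-in for a low-latency, in-transaction inference call."""
    return random.random()  # placeholder risk score in [0, 1]

def process_payment(transaction: dict, risk_threshold: float = 0.95) -> str:
    """In-path scoring: the decision is made before the transaction commits."""
    start = time.perf_counter()
    risk = score_fraud(transaction)
    latency_ms = (time.perf_counter() - start) * 1000
    if risk > risk_threshold:
        return f"declined (risk={risk:.2f}, scored in {latency_ms:.3f} ms)"
    return f"approved (risk={risk:.2f}, scored in {latency_ms:.3f} ms)"

print(process_payment({"amount": 250.00, "merchant": "example-merchant"}))
```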

NVIDIA’s AI chips

NVIDIA announced an “AI-on-5G” machine in June to extend support for Arm-based CPUs in the NVIDIA Aerial A100 platform. The computing platform uses the NVIDIA Aerial software development kit and will incorporate 16 Arm Cortex-A78 processors into the NVIDIA BlueField-3 A100.

SEE: Global chip shortage is hitting close to home (TechRepublic)

According to the company, this is a self-contained, converged card that delivers enterprise edge AI applications over cloud-native 5G vRAN with improved performance per watt and faster time to deployment. BlueField-3 is a next-generation data processing unit (DPU) built for AI and accelerated computing and optimized for 5G connectivity. The machine includes an NVIDIA A100 GPU, a BlueField-2 DPU and a CPU, either Arm or x86.

The company announced its A100 AI chip in May 2020. The A100 packs 54 billion transistors, and the DGX A100 system built around it delivers 5 petaflops of AI performance; that system is the third generation of the company's DGX platform.
