We are pleased to announce that the Tenstorrent NLP API has been integrated into Eden AI.
Tenstorrent develops innovative and powerful AI products for both inference and training, with a focus on NLP and vision models. They offer AI chips designed to run both NLP and vision models efficiently on a single platform, along with software solutions that cater to ML developers' needs for execution, scaling, and hardware control.
Tenstorrent also provides high-performing RISC-V CPU technology, optimized graph processing accelerators, and customizable computing solutions, including licensing their CPU design IP and partnering for custom designs.
Eden AI offers Tenstorrent's NLP solutions on its platform alongside many other technologies. We want our users to have access to multiple AI engines and to manage them in one place, so they can reach high performance, optimize costs, and cover all their needs.
There are many reasons for using multiple AI APIs:
Fallback provider: set up a backup AI API that is requested only when the main AI API does not perform well (or is down). You can use the confidence score returned by the provider, or other checks, to evaluate provider accuracy.
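As an illustration, here is a minimal Python sketch of this fallback pattern. The provider clients and the "confidence" response field are hypothetical placeholders, not a specific vendor's API:

```python
# Minimal fallback sketch (hypothetical provider clients and response fields).
def analyze_with_fallback(text, main_provider, backup_provider, min_confidence=0.8):
    """Call the main provider first; fall back if it fails or its confidence is too low."""
    try:
        result = main_provider.analyze(text)
        if result.get("confidence", 0.0) >= min_confidence:
            return result
    except Exception:
        pass  # main provider is down or returned an error
    # The backup provider is only requested when the main one is unavailable or unconvincing.
    return backup_provider.analyze(text)
```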
Performance optimization: after the testing phase, you will be able to build a mapping of AI vendors' performance based on the criteria you chose. Each piece of data you need to process will then be sent to the best API.
Cost optimization: this method allows you to choose the cheapest provider that still performs well for your data. Imagine you pick the Google Cloud API for customer "A" because every provider performs well there and Google is the cheapest, but choose Microsoft Azure, a more expensive API, for customer "B" because Google's performance is not satisfactory for that customer (a purely illustrative example).
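To make the mapping idea concrete, here is a small Python sketch that picks the cheapest provider meeting an accuracy bar for each customer. Provider names, accuracy figures, and prices are made up for illustration only:

```python
# Hypothetical per-customer benchmark results gathered during a testing phase.
# Accuracy and price values are illustrative only.
BENCHMARKS = {
    "customer_A": [
        {"provider": "google",    "accuracy": 0.93, "price_per_1k_calls": 1.0},
        {"provider": "microsoft", "accuracy": 0.94, "price_per_1k_calls": 2.5},
    ],
    "customer_B": [
        {"provider": "google",    "accuracy": 0.78, "price_per_1k_calls": 1.0},
        {"provider": "microsoft", "accuracy": 0.92, "price_per_1k_calls": 2.5},
    ],
}

def pick_provider(customer, min_accuracy=0.9):
    """Return the cheapest provider that meets the accuracy threshold for this customer."""
    good_enough = [b for b in BENCHMARKS[customer] if b["accuracy"] >= min_accuracy]
    if not good_enough:
        raise ValueError(f"No provider meets the accuracy bar for {customer}")
    return min(good_enough, key=lambda b: b["price_per_1k_calls"])["provider"]

print(pick_provider("customer_A"))  # -> "google" (cheapest provider that performs well)
print(pick_provider("customer_B"))  # -> "microsoft" (google is not accurate enough here)
```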
Combining providers: this approach is required if you are looking for extremely high accuracy. The combination leads to higher costs, but it keeps your AI service safe and accurate because the AI APIs validate or invalidate each other on every piece of data.
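One simple way to combine providers is an agreement check. The sketch below, again with hypothetical provider clients, keeps a prediction only when enough providers agree on it:

```python
from collections import Counter

# Hypothetical combination sketch: each client exposes a .classify(text) -> label method.
def cross_validated_label(text, providers, min_agreement=2):
    """Query every provider and keep a label only if enough of them agree on it."""
    labels = [p.classify(text) for p in providers]
    label, votes = Counter(labels).most_common(1)[0]
    if votes >= min_agreement:
        return label
    # Providers disagree: flag the item for manual review instead of guessing.
    return None
```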
We had the chance to talk to Shubham Saboo, Tenstorrent's Head of Developer Relations, who agreed to answer some of our questions:
Tenstorrent, under the leadership of industry veteran Jim Keller, builds computers for AI. We develop the most innovative and powerful AI products in the industry, handling both inference and training.
These AI chips are specifically designed to efficiently run both NLP and vision models on a single silicon platform with a single easy-to-use software stack.
Our software approach caters to ML developers by providing seamless execution and scaling, while also offering bare metal access and kernel-level control for developers who require maximum hardware control.
Tenstorrent has also developed the world's highest-performing RISC-V CPU technology, featuring a modular and composable design that integrates seamlessly with our AI chips.
Tenstorrent has developed AI/ML accelerators specifically optimized for graph processing tasks that provide a unique alternative to conventional GPUs. Our approach to deep learning processing involves mapping different layers of a deep learning graph to different cores on the chip, ensuring optimal utilization and efficient data flow.
Using that methodology, we deliver performant, scalable AI and ML products directly to end customers in the form of cards and systems.
We also provide a cloud service for those who prefer to ‘rent’ vs ‘buy’ their compute capabilities. In addition, we can license our superscalar RISC-V CPU design Intellectual Property to chip producers for productization and integration into systems.
Finally, Tenstorrent can partner for custom-designed, best-in-class computing solutions. Based on our modular chiplet architecture, we can produce AI computers to specification efficiently.
To support these products we will offer two separate software solutions: one designed to take high-level AI frameworks (like PyTorch, TensorFlow, and JAX) and compile them into executable code that runs efficiently on Tenstorrent's hardware. The second is a low-level framework available for those wanting to program at the kernel level and directly access the matrix and vector arithmetic capabilities of our chips.
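As a rough, purely illustrative sketch of the high-level path: such a compiler would ingest an ordinary framework model like the small PyTorch module below. The compile call itself is shown only as a hypothetical placeholder, since the actual Tenstorrent SDK entry points are not described in this article:

```python
import torch
import torch.nn as nn

# An ordinary PyTorch model: the level at which the high-level
# software stack described above is said to operate.
class TinyClassifier(nn.Module):
    def __init__(self, vocab_size=10_000, hidden=128, num_classes=2):
        super().__init__()
        self.embed = nn.EmbeddingBag(vocab_size, hidden)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, token_ids, offsets):
        return self.head(self.embed(token_ids, offsets))

model = TinyClassifier()

# Hypothetical placeholder: the real compile-and-run step depends on
# Tenstorrent's SDK, which is not documented here.
# compiled = tenstorrent_compile(model)  # name is illustrative only
```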
Our mission is to address open-source compute demands through industry-leading AI/ML accelerators, high-performing RISC-V CPUs, and infinitely configurable ML and CPU chiplets.
Our products will scale from small, low-power cards to very large servers allowing customers to deploy in their own specific environment. This variety of product offerings naturally drives a diverse customer base.
Customer industries range from automotive all the way to High-Performance Compute companies; our hardware and software are scalable and customizable to solve specific customer needs.
Neural Networks and AI Models have been exploding over the past couple of years. We have seen a large step function occur in NLP, embedded, and IoT applications. This has been primarily driven by the ability of the hardware to handle increasing dataset sizes and the ability to train large-scale AI models.
As models and datasets have continued to scale, the underlying hardware has needed the ability to both act as an efficient single-chip solution, as well as a scalable architecture, without running into networking bottlenecks or making it power or cost prohibitive.
As the innovation continues to advance, the hardware must be efficient for current implementations and accommodate future AI workloads that we do not know about yet. Our simple goal is to create outstanding products that will evolve with the ever-changing needs of our customers.
We are also committed to open-source products in order to support continued innovation in this fast-paced segment.
Eden AI is a first-of-its-kind platform that offers APIs for nearly all ML tasks under one roof. Its easy-to-use interface and streamlined experience for end users are what prompted us to integrate our ML offerings.
Because of our unique hardware and software stack, we are in a position to offer affordable inference solutions for a variety of ML tasks, and through Eden AI we can reach end users directly and improve our offerings based on bottom-up feedback. We anticipate that this collaboration will help us understand the needs of end users, which will in turn help reshape our next generation of products and services.
You'll need the Eden AI documentation to use Tenstorrent's NLP technologies on Eden AI; then you can call the API:
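As a starting point, here is a minimal Python sketch assuming the usual Eden AI v2 REST pattern and a "tenstorrent" provider identifier; check the documentation for the exact feature path and parameters for your task:

```python
import requests

# Minimal sketch of calling a Tenstorrent-backed NLP feature through Eden AI.
# The feature path, parameters, and provider name may differ; see the Eden AI docs.
API_KEY = "YOUR_EDEN_AI_API_KEY"  # placeholder

response = requests.post(
    "https://api.edenai.run/v2/text/sentiment_analysis",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "providers": "tenstorrent",  # assumed provider identifier
        "text": "Eden AI makes it easy to compare NLP providers.",
        "language": "en",
    },
)
print(response.json())
```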
Eden AI is the future of AI usage in companies. Our platform not only allows you to call multiple AI APIs, but also gives you a single place to manage them, compare their performance, and optimize your costs.
You can see the Eden AI documentation here.
You can directly start building now. If you have any questions, feel free to schedule a call with us!