
Microsoft announced its first in-house AI models on Thursday: MAI-Voice-1 and MAI-1-preview. The company says MAI-Voice-1 can generate a minute of audio in under one second on a single GPU, while MAI-1-preview offers “a glimpse of future offerings inside Copilot.”
You can try MAI-Voice-1 for yourself on Copilot Labs, where you can enter what you want the AI model to say, as well as change its voice and speaking style. Alongside this model, Microsoft announced MAI-1-preview, which it says was trained on about 15,000 Nvidia H100 GPUs. It is designed for users who need an AI model capable of following instructions and “providing helpful responses to everyday queries.”
Microsoft AI chief Mustafa Suleyman said during an episode of Decoder last year that the company’s internal models are not focused on enterprise use cases. “My logic is that we have to create something that works extremely well for the consumer and really optimize for our use case,” Suleyman said. “We have vast amounts of very predictive and very useful data on the ad side, on consumer telemetry, and so on. My focus is on building models that really work for the consumer companion.”
The company plans to roll out MAI-1-preview for certain text use cases within Copilot, which currently relies on OpenAI’s large language models. It has also begun testing MAI-1-preview on the AI benchmarking platform LMArena.
“We have big ambitions for where we go next,” Microsoft writes in the blog post. “Not only will we pursue further advances here, but we believe that orchestrating a range of specialized models serving different user intents and use cases will unlock immense value.”