onnxruntime (@onnxruntime) 's Twitter Profile
onnxruntime

@onnxruntime

Cross-platform training and inferencing accelerator for machine learning models.

ID: 1041831598415523841

Link: http://onnxruntime.ai · Joined: 17-09-2018 23:29:44

283 Tweets

1.1K Followers

42 Following

Anton Lozhkov (@anton_lozhkov) 's Twitter Profile Photo

🏭 The hardware optimization floodgates are open!🔥 Diffusers 0.3.0 supports an experimental ONNX exporter and pipeline for Stable Diffusion 🎨 To find out how to export your own checkpoint and run it with onnxruntime, check the release notes: github.com/huggingface/di…

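For context, running the experimental pipeline looks roughly like this in Python. This is a minimal sketch following the Diffusers 0.3.0 release notes; the class name and the revision="onnx" convention come from that release and may have changed in later versions:

```python
from diffusers import StableDiffusionOnnxPipeline

# Load the ONNX export of the checkpoint and run it through onnxruntime.
# Any execution provider works; CPU is used here for portability.
pipe = StableDiffusionOnnxPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    revision="onnx",                   # the ONNX export branch of the repo
    provider="CPUExecutionProvider",
)
image = pipe("a photo of an astronaut riding a horse").images[0]
image.save("astronaut.png")
```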
OverNet Education (@overnete) 's Twitter Profile Photo

💡A Senior Research & Development Engineer at Deltatre, @tinux80 is also a #MicrosoftMVP and an Intel Software Innovator. 📊Don't miss his talk on #AzureML and #Onnx Runtime at #WPC2022! 👉Buy your ticket: wpc2022.eventbrite.it Microsoft Italia

Santosh Dahal (@exendahal) 's Twitter Profile Photo

Gerald Versluis What about a video on ONNX Runtime? Here is the official documentation: devblogs.microsoft.com/xamarin/machin… And a MAUI example: github.com/microsoft/onnx…

Open at Microsoft (@openatmicrosoft) 's Twitter Profile Photo

The natural language processing library Apache OpenNLP is now integrated with ONNX Runtime! Get the details and a tutorial explaining its use on the blog: msft.it/6013jfemt #OpenSource

onnxruntime (@onnxruntime) 's Twitter Profile Photo

In this article, a community member used #ONNXRuntime from the Ruby language to try out a GPT-2 model that generates English sentences: dev.to/kojix2/text-ge…
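The linked article drives ONNX Runtime from Ruby; the same idea in Python looks roughly like the sketch below. The file name and export command are assumptions, and it presumes the export includes the language-modeling head so the first output is logits.

```python
# Greedy text generation with a GPT-2 ONNX model, e.g. one exported via
#   python -m transformers.onnx --model=gpt2 --feature=causal-lm .
import numpy as np
import onnxruntime as ort
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
session = ort.InferenceSession("gpt2.onnx", providers=["CPUExecutionProvider"])

ids = tokenizer.encode("Hello, my name is", return_tensors="np")
for _ in range(20):  # append one greedily chosen token per step
    logits = session.run(None, {"input_ids": ids, "attention_mask": np.ones_like(ids)})[0]
    ids = np.concatenate([ids, [[logits[0, -1].argmax()]]], axis=1)
print(tokenizer.decode(ids[0]))
```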

Loreto Parisi (@loretoparisi) 's Twitter Profile Photo

Finally, tokenization with SentencePiece BPE works as expected in #NodeJS #JavaScript with the tokenizers library 🚀! Now getting "invalid expand shape" errors when passing the tokens' encoded ids to the onnxruntime-converted MiniLM model from Microsoft Research huggingface.co/microsoft/Mult…

onnxruntime (@onnxruntime) 's Twitter Profile Photo

📣 #ONNXRuntime v1.13.0 was just released!!! Check out the release notes and the video from the engineering team to learn more about what's in this release! 📝github.com/microsoft/onnx… 📽️youtu.be/vo9vlR-TRK4

efxmarty (@efxmarty) 's Twitter Profile Photo

Want to use TensorRT as your inference engine for its GPU speedups, but don't want to deal with the compilation hassle? We've got you covered with 🤗 Optimum! With one line, leverage TensorRT through onnxruntime! Check out more at hf.co/docs/optimum/o…

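Concretely, the "one line" is the provider argument. A sketch assuming optimum[onnxruntime-gpu] and a TensorRT-enabled onnxruntime build are installed; the model name is just an example:

```python
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer

# provider= routes inference through TensorRT via onnxruntime,
# with no manual engine-compilation step.
model = ORTModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased-finetuned-sst-2-english",
    export=True,  # convert the PyTorch checkpoint to ONNX on the fly
    provider="TensorrtExecutionProvider",
)
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english")
inputs = tokenizer("TensorRT speedups without the hassle!", return_tensors="pt")
logits = model(**inputs).logits
```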
Jingya Huang (@jhuaplin) 's Twitter Profile Photo

Imagine the frustration of finding, after applying all the optimization tricks, that copying data to the GPU slows down your "MUST-BE-FAST" inference...🥵 🤗 Optimum v1.5.0 added onnxruntime IOBinding support to reduce your memory footprint. 👀 github.com/huggingface/op… More ⬇️
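In practice the feature is a single flag. A sketch assuming a CUDA setup and an example model:

```python
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer

# use_io_binding=True binds inputs and outputs to device buffers, so
# onnxruntime skips host<->device copies on every forward pass.
model = ORTModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased-finetuned-sst-2-english",
    export=True,
    provider="CUDAExecutionProvider",
    use_io_binding=True,
)
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english")
inputs = tokenizer("keep my tensors on the GPU", return_tensors="pt").to("cuda")
logits = model(**inputs).logits
```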

ONNX (@onnxai) 's Twitter Profile Photo

We are seeking your input to shape the ONNX roadmap! Proposals are being collected until January 24, 2023 and will be discussed in February. Submit your ideas at forms.microsoft.com/pages/response…

onnxruntime (@onnxruntime) 's Twitter Profile Photo

In this blog, we discuss how to make large models like #BERT smaller and faster with #Intel #OpenVINO, the Neural Network Compression Framework (NNCF), and #ONNX Runtime through #Azure! 👇 cloudblogs.microsoft.com/opensource/202…
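One piece of that stack, running a compressed model on the OpenVINO Execution Provider, looks roughly like this. A sketch assuming the onnxruntime-openvino package is installed; the model file is a placeholder:

```python
import onnxruntime as ort

# Route execution through Intel OpenVINO instead of the default CPU provider.
session = ort.InferenceSession(
    "bert-quantized.onnx",  # placeholder: e.g. a BERT model compressed with NNCF
    providers=["OpenVINOExecutionProvider"],
    provider_options=[{"device_type": "CPU_FP32"}],  # device naming varies by version
)
print(session.get_providers())  # confirm OpenVINO is active
```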

onnxruntime (@onnxruntime) 's Twitter Profile Photo

Join us live TODAY! We will be talking to Akhila Vidiyala and Devang Aggarwal on the AI Show with Cassie! We will show how developers can use #huggingface #optimum #Intel to quantize models and then use the #OpenVINO Execution Provider for #ONNXRuntime to accelerate performance. 👇 aka.ms/aishowlive

onnxruntime (@onnxruntime) 's Twitter Profile Photo

Give yourself a treat (like this adorable🐶 deserves) and read this blog on how to use #ONNX Runtime on #Android! devblogs.microsoft.com/surface-duo/on…

onnxruntime (@onnxruntime) 's Twitter Profile Photo

#ONNX Runtime saved the day with its interoperability and ability to run locally on-client and/or in the cloud! Our lightweight solution gave them the performance they needed, with quantization & configuration tooling. Learn how they achieved this in this blog! cloudblogs.microsoft.com/opensource/202…
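The quantization tooling mentioned ships with onnxruntime itself; dynamic INT8 quantization, for instance, is only a few lines (file names here are placeholders):

```python
from onnxruntime.quantization import QuantType, quantize_dynamic

# Rewrite the FP32 model with INT8 weights: a smaller file and
# faster CPU inference, at a small accuracy cost.
quantize_dynamic(
    model_input="model.onnx",
    model_output="model.int8.onnx",
    weight_type=QuantType.QInt8,
)
```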

Szymon  (@szymon_lorenz) 's Twitter Profile Photo

Developers, don't overlook the power of Swift Package Manager! It simplifies dependency management and promotes modularity. Plus, exciting news: ONNXRuntime just added support for SPM! #iOSdev #SwiftPM #ONNXRuntime

onnxruntime (@onnxruntime) 's Twitter Profile Photo

Run PyTorch models in the browser, on mobile and desktop, with #onnxruntime, in your language and development environment of choice 🚀onnxruntime.ai/blogs/pytorch-…
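That workflow starts with a one-time export from PyTorch; the resulting file can then be loaded by any onnxruntime binding (web, mobile, or desktop). A minimal sketch with a torchvision model:

```python
import torch
import torchvision

# Export a pretrained model to ONNX; the dummy input fixes the graph shapes.
model = torchvision.models.mobilenet_v2(weights="DEFAULT").eval()
dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy, "mobilenet_v2.onnx",
                  input_names=["input"], output_names=["logits"])
```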

onnxruntime (@onnxruntime) 's Twitter Profile Photo

ONNX Runtime & DirectML now support Phi-3 mini models across platforms & devices! Plus, the new ONNX Runtime Generate() API simplifies LLM integration into your apps. Try Phi-3 on your favorite hardware! Read more: onnxruntime.ai/blogs/accelera… #ONNX #DirectML #Phi3
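A hedged sketch of the Generate() API through the onnxruntime-genai Python package; the model directory is a placeholder, and the method names follow the package's published examples, which may shift between versions:

```python
import onnxruntime_genai as og

model = og.Model("phi-3-mini-4k-instruct-onnx")  # placeholder model folder
tokenizer = og.Tokenizer(model)

params = og.GeneratorParams(model)
params.set_search_options(max_length=200)

generator = og.Generator(model, params)
generator.append_tokens(tokenizer.encode("What is ONNX Runtime?"))
while not generator.is_done():
    generator.generate_next_token()  # one decoding step per call
print(tokenizer.decode(generator.get_sequence(0)))
```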