Phi represents a collection of open-source AI models developed by Microsoft.
Currently, Phi stands out as the most advanced and cost-efficient small language model (SLM), delivering impressive results in multilingual tasks, reasoning, text/chat generation, coding, image processing, audio tasks, and beyond.
Phi can be deployed both in the cloud and on edge devices, enabling the creation of generative AI applications even with limited computational resources.
Here's how you can get started with these resources:
- Fork the Repository: Click the "Fork" button at the top-right of this repository page.
- Clone the Repository:
  git clone https://github.com/microsoft/PhiCookBook.git
- Join The Microsoft AI Discord Community to connect with experts and fellow developers
French | Spanish | German | Russian | Arabic | Persian (Farsi) | Urdu | Chinese (Simplified) | Chinese (Traditional, Macau) | Chinese (Traditional, Hong Kong) | Chinese (Traditional, Taiwan) | Japanese | Korean | Hindi | Bengali | Marathi | Nepali | Punjabi (Gurmukhi) | Portuguese (Portugal) | Portuguese (Brazil) | Italian | Polish | Turkish | Greek | Thai | Swedish | Danish | Norwegian | Finnish | Dutch | Hebrew | Vietnamese | Indonesian | Malay | Tagalog (Filipino) | Swahili | Hungarian | Czech | Slovak | Romanian | Bulgarian | Serbian (Cyrillic) | Croatian | Slovenian
- Introduction
- Inferencing Phi in different environments
- Inferencing the Phi family
- Inference Phi on iOS
- Inference Phi on Android
- Inference Phi on Jetson
- Inference Phi on AI PCs
- Inference Phi with the Apple MLX framework
- Inference Phi on a local server
- Inference Phi on a remote server using AI Toolkit
- Inference Phi with Rust
- Inference Phi-Vision locally
- Inference Phi with Kaito AKS, Azure Containers (official support)
- Evaluating Phi
- RAG with Azure AI Search
- Phi application development samples
- Text & Chat Applications
- Phi-4 Samples 🆕
- Phi-3 / 3.5 Samples
- Local Chatbot in the browser using Phi3, ONNX Runtime Web and WebGPU
- OpenVino Chat
- Multi Model - Interactive Phi-3-mini and OpenAI Whisper
- MLFlow - Building a wrapper and using Phi-3 with MLFlow
- Model Optimization - How to optimize the Phi-3-mini model for ONNX Runtime Web with Olive
- WinUI3 App with Phi-3 mini-4k-instruct-onnx
- WinUI3 Multi Model AI Powered Notes App Sample
- Fine-tune and Integrate custom Phi-3 models with Prompt flow
- Fine-tune and Integrate custom Phi-3 models with Prompt flow in Azure AI Foundry
- Evaluate the Fine-tuned Phi-3 / Phi-3.5 Model in Azure AI Foundry Focusing on Microsoft's Responsible AI Principles
- [📓] Phi-3.5-mini-instruct language prediction sample (Chinese/English)
- Phi-3.5-Instruct WebGPU RAG Chatbot
- Using Windows GPU to create Prompt flow solution with Phi-3.5-Instruct ONNX
- Using Microsoft Phi-3.5 tflite to create Android app
- Q&A .NET Example using local ONNX Phi-3 model using the Microsoft.ML.OnnxRuntime
- Console chat .NET app with Semantic Kernel and Phi-3
- Azure AI Inference SDK Code Based Samples
- Advanced Reasoning Samples
- Phi-4 Samples 🆕
- Demos
- Vision Samples
- Phi-4 Samples 🆕
- Phi-3 / 3.5 Samples
- [📓] Phi-3-vision Image text to text
- Phi-3-vision-ONNX
- [📓]Phi-3-vision CLIP Embedding
- DEMO: Phi-3 Recycling
- Phi-3-vision - Visual language assistant - with Phi3-Vision and OpenVINO
- Phi-3 Vision Nvidia NIM
- Phi-3 Vision OpenVino
- [📓]Phi-3.5 Vision multi-frame or multi-image sample
- Phi-3 Vision Local ONNX Model using the Microsoft.ML.OnnxRuntime .NET
- Menu based Phi-3 Vision Local ONNX Model using the Microsoft.ML.OnnxRuntime .NET
- Audio Samples
- MOE Samples
- Function Calling Samples
- Multimodal Mixing Samples
- Fine-tuning Phi Samples
- Fine-tuning Scenarios
- Fine-tuning vs RAG
- Fine-tuning: Let Phi-3 become an industry expert
- Fine-tuning Phi-3 with AI Toolkit for VS Code
- Fine-tuning Phi-3 with Azure Machine Learning Service
- Fine-tuning Phi-3 with LoRA
- Fine-tuning Phi-3 with QLoRA
- Fine-tuning Phi-3 with Azure AI Foundry
- Fine-tuning Phi-3 with Azure ML CLI/SDK
- Hands-on Lab
- Academic Research Papers and Publications
- Textbooks Are All You Need II: phi-1.5 technical report
- Phi-3 Technical Report: A Highly Capable Language Model Locally on Your Phone
- Phi-4 Technical Report
- Phi-4-Mini Technical Report: Compact yet Powerful Multimodal Language Models via Mixture-of-LoRAs
- Optimizing Small Language Models for In-Vehicle Function-Calling
- (WhyPHI) Fine-Tuning PHI-3 for Multiple-Choice Question Answering: Methodology, Results, and Challenges
Learn how to use Microsoft Phi and build end-to-end solutions across various hardware devices. To get started with Phi, explore the models and customize them for your specific needs using the Azure AI Model Catalog in Azure AI Foundry. For additional details, refer to Getting Started with Azure AI Foundry.
Playground
Each model offers a dedicated playground for testing: Azure AI Playground.
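To make the Azure AI Foundry path concrete, here is a minimal sketch of calling a Phi deployment with the azure-ai-inference Python package. It assumes a Phi model already deployed from the model catalog to a serverless or managed endpoint, with the endpoint URL and key exposed through environment variables (the variable names below are placeholders).

```python
import os

from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential

# Placeholder environment variable names; use whatever your deployment provides.
client = ChatCompletionsClient(
    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["AZURE_INFERENCE_KEY"]),
)

response = client.complete(
    messages=[
        SystemMessage(content="You are a helpful assistant."),
        UserMessage(content="Summarize what the Phi family of models is in two sentences."),
    ],
    max_tokens=200,
)
print(response.choices[0].message.content)
```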
Discover how to use Microsoft Phi and create end-to-end solutions across different hardware setups. Begin by exploring and customizing the model for your scenarios through the GitHub Model Catalog. For more information, see Getting Started with GitHub Model Catalog.
Playground
Each model provides a playground for testing.
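The GitHub Model Catalog can be called with the same azure-ai-inference client; the difference is the shared inference endpoint and authentication with a GitHub personal access token. A rough sketch, assuming a token in GITHUB_TOKEN and a Phi model id taken from the catalog (the id below is illustrative):

```python
import os

from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import UserMessage
from azure.core.credentials import AzureKeyCredential

client = ChatCompletionsClient(
    endpoint="https://models.inference.ai.azure.com",  # GitHub Models shared endpoint at the time of writing
    credential=AzureKeyCredential(os.environ["GITHUB_TOKEN"]),
)

response = client.complete(
    model="Phi-4",  # illustrative id; check the GitHub Model Catalog for the exact name
    messages=[UserMessage(content="Write a haiku about small language models.")],
)
print(response.choices[0].message.content)
```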
The model is also available on Hugging Face.
Playground
Explore the Hugging Chat playground.
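For quick local experimentation you can also pull a Phi checkpoint directly from the Hugging Face Hub with transformers. A minimal sketch, assuming microsoft/Phi-3-mini-4k-instruct and enough memory to load it; any other Phi checkpoint from the Hub can be substituted:

```python
import torch
from transformers import pipeline

# Load a Phi checkpoint from the Hugging Face Hub as a chat-style text-generation pipeline.
# device_map="auto" requires the accelerate package; older transformers releases may also
# need trust_remote_code=True for Phi-3.
generator = pipeline(
    "text-generation",
    model="microsoft/Phi-3-mini-4k-instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [{"role": "user", "content": "Give one tip for writing readable Python."}]
result = generator(messages, max_new_tokens=128)
# The pipeline returns the conversation with the assistant reply appended at the end.
print(result[0]["generated_text"][-1]["content"])
```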
Microsoft is dedicated to helping customers use AI solutions responsibly, sharing best practices, and fostering trust through tools like Transparency Notes and Impact Assessments. Many of these resources are accessible at https://aka.ms/RAI.
Microsoft’s approach to responsible AI is rooted in its principles of fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability.
Large-scale models for natural language, image, and speech tasks—like those showcased here—can occasionally exhibit behavior that is unfair, unreliable, or offensive, potentially leading to harm. Review the Azure OpenAI service Transparency note for insights into risks and limitations.
To mitigate these risks, it’s recommended to incorporate a safety system into your architecture to detect and prevent harmful behavior. Azure AI Content Safety offers an independent layer of protection, capable of identifying harmful content generated by users or AI in applications and services. Azure AI Content Safety provides APIs for text and image analysis to detect harmful material. Within Azure AI Foundry, the Content Safety service includes sample code to explore and test content detection across modalities. Refer to the quickstart documentation for guidance on making service requests.
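As an illustration, here is a short sketch of screening application text with the Azure AI Content Safety SDK (azure-ai-contentsafety). It assumes a provisioned Content Safety resource; the environment variable names are placeholders:

```python
import os

from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

# Placeholder variable names for a provisioned Azure AI Content Safety resource.
client = ContentSafetyClient(
    endpoint=os.environ["CONTENT_SAFETY_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["CONTENT_SAFETY_KEY"]),
)

model_output = "Text produced by your Phi-based application."
result = client.analyze_text(AnalyzeTextOptions(text=model_output))

# Each entry reports a harm category (Hate, SelfHarm, Sexual, Violence) and a severity score.
for item in result.categories_analysis:
    print(item.category, item.severity)
```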
Performance is another key consideration for applications involving multi-modal and multi-model setups. Here, performance means the system behaves as you and your users expect, which includes not generating harmful outputs. Assess your application's overall performance with the Performance and Quality and the Risk and Safety evaluators, and create custom evaluators where the built-in ones don't cover your scenario. You can evaluate your AI application in your development environment using the Azure AI Evaluation SDK: given a test dataset or a target, the outputs of your generative AI application are quantitatively assessed with built-in or custom evaluators tailored to your needs. To begin using the Azure AI Evaluation SDK, refer to the quickstart guide; after executing an evaluation run, you can view the results in Azure AI Foundry.
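Below is a rough sketch of an evaluation run with the Azure AI Evaluation SDK (azure-ai-evaluation). It assumes a JSONL test dataset with query/response/context columns and an Azure OpenAI deployment acting as the judge model; the dataset path and deployment name are placeholders:

```python
import os

from azure.ai.evaluation import GroundednessEvaluator, RelevanceEvaluator, evaluate

# Placeholder judge-model configuration (an Azure OpenAI deployment).
model_config = {
    "azure_endpoint": os.environ["AZURE_OPENAI_ENDPOINT"],
    "api_key": os.environ["AZURE_OPENAI_API_KEY"],
    "azure_deployment": "gpt-4o",
}

result = evaluate(
    data="test_data.jsonl",  # placeholder dataset with query/response/context columns
    evaluators={
        "groundedness": GroundednessEvaluator(model_config),
        "relevance": RelevanceEvaluator(model_config),
    },
)
print(result["metrics"])
```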
This project may include trademarks or logos associated with projects, products, or services. Any authorized use of Microsoft trademarks or logos must adhere to Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must avoid causing confusion or implying Microsoft sponsorship. The use of third-party trademarks or logos must comply with those third parties' policies.