Recommended reasons:
Key Technical Features of the Fibocom AI Stack:
• Integrated high-performance modules compatible across chip platforms and operating systems
• AI toolchain supporting model compression and conversion for frameworks like TensorFlow and PyTorch
• Inference engine with heterogeneous scheduling and hardware acceleration
• AI model repository hosting a vast collection of pre-trained models
• Full-service support
Cross‑platform compatibility:
Supports Android, Linux, and Ubuntu, significantly lowering the barrier to AI development on edge devices.
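To illustrate what a model-compression step in such a toolchain typically does, the NumPy sketch below applies symmetric per-tensor INT8 quantization to a weight matrix. This is a generic technique sketch under assumed settings, not Fibocom's actual toolchain, and all names in it are illustrative.

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor INT8 quantization: w ≈ scale * q."""
    scale = float(np.abs(w).max()) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((256, 256)).astype(np.float32)

q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# INT8 storage is 4x smaller than FP32; rounding error stays below scale/2.
print("size ratio:", w.nbytes / q.nbytes)         # 4.0
print("max abs error:", np.abs(w - w_hat).max())
```

Compression and conversion pipelines for TensorFlow or PyTorch models apply the same idea per layer (often per channel), trading a small, bounded accuracy loss for a 4x reduction in weight memory and bandwidth.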
Fibocom's high-compute AI modules and solutions support the DeepSeek-R1 distillation models. This not only demonstrates Fibocom's technical strength in edge AI but also paves the way for large-scale commercial deployment at the edge. To broaden applications across industries, Fibocom is actively bringing high-quality models like DeepSeek to its high-, mid-, and low-power AI modules and solutions, offering model services at different parameter scales to further lower technical barriers and optimize costs. This helps customers rapidly enhance on-device AI inference and enables a wider range of IoT devices to become AI-enabled.
Applications powered by Fibocom modules equipped with DeepSeek‑R1 include:
• Autonomous driving (path planning and dynamic obstacle avoidance)
• Robotics control (autonomous navigation and manipulation in complex environments)
• Smart manufacturing (production line optimization and resource scheduling)
• Smart healthcare (surgical assistance and intelligent diagnosis)
• AI agents (automated office workflows, intelligent monitoring, and decision support)
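As a rough guide to why offering models at different parameter scales matters for edge deployment, the back-of-envelope sketch below estimates weight storage as parameters × bits per weight / 8. The 1.5B/7B/14B scales and the INT4 setting are illustrative assumptions, not Fibocom specifications.

```python
def weight_memory_gib(params_billion: float, bits_per_weight: int) -> float:
    """Approximate weight storage: params * bits / 8 bytes, reported in GiB."""
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 2**30

# Hypothetical distilled-model scales with INT4-quantized weights.
for b in (1.5, 7, 14):
    print(f"{b}B params @ INT4 ≈ {weight_memory_gib(b, 4):.2f} GiB")
```

Under these assumptions a 1.5B-parameter model fits comfortably in under 1 GiB, which is why smaller distilled variants are the natural match for low- and mid-power modules.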
Recommended reasons:
Moffett AI is a leader in sparse AI computing, dedicated to providing AI computing platforms and services, with a mission to keep pushing the frontiers of AI through sparse computing.
Founded in 2018, Moffett AI is headquartered in Shenzhen, China, with offices in Shanghai, Beijing, and Silicon Valley. The founding team includes top AI scientists from Carnegie Mellon University and core members of high-volume chip R&D teams at leading semiconductor companies such as Intel, Qualcomm, and Marvell, bringing more than 20 years of industry experience, dozens of shipped products, and volume-production experience spanning over 5 billion mainstream chips.
As the inventor of the dual-sparsity algorithm, Moffett AI holds world-leading sparse computing technology, with more than 40 patents worldwide. Through hardware-software co-design, the company is building a new generation of AI computing platform that delivers order-of-magnitude gains in computing performance at low cost.
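To illustrate the idea behind dual sparsity, understood here as exploiting zeros in both weights and activations, the NumPy sketch below counts how many multiply-accumulates a sparsity-aware engine could skip on a toy layer. This is a conceptual illustration with assumed sparsity levels, not Moffett AI's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy layer: prune 90% of weights; ReLU leaves roughly half the activations at zero.
w = rng.standard_normal((128, 128))
w[rng.random(w.shape) < 0.9] = 0.0           # weight sparsity (pruning)
x = np.maximum(rng.standard_normal(128), 0)  # activation sparsity (ReLU)

dense_macs = w.size
# A multiply-accumulate is only needed where BOTH the weight and its
# corresponding input activation are nonzero.
effective_macs = int(np.count_nonzero(w * (x != 0)))

print("dense MACs:    ", dense_macs)
print("effective MACs:", effective_macs)
print(f"upper-bound speedup: {dense_macs / effective_macs:.1f}x")
```

Because the two sparsity patterns multiply together, combined weight and activation sparsity can skip an order of magnitude more work than either alone, which is the source of the claimed performance-per-cost gains.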
Moffett AI brings low-latency, low-power, cost-effective computing services to the industry without compromising accuracy. It helps customers significantly reduce the total cost of ownership (TCO) of AI computing while meeting the growing demand for AI compute, enabling greener, more sustainable AI computing that benefits society.
Recommended reasons:
Efficient Processing for Sustained Edge AI and Graphics Performance
Imagination’s PowerVR GPU architecture is renowned for its energy efficiency and has been deployed in power-constrained devices for nearly twenty years. The E-Series’ new Burst Processors technology enhances power efficiency by a further 35% for AI workloads, games and user interfaces. This improvement is achieved by reducing pipeline depth and minimising data movement within the GPU.
Recommended reasons:
Technical Features and Specifications
Actions Technology's Edge AI Chip Technology Based on Compute-in-Memory represents a major breakthrough in edge AI chip architecture, achieving significant improvements in edge computing power efficiency. The AI acceleration engine employs mixed-mode SRAM compute-in-memory technology: the first generation delivers 0.1 TOPS at 500 MHz with an energy efficiency of 6.4 TOPS/W at INT8, which can be further enhanced through sparse-matrix adaptive optimization. Edge AI chips built on this technology adopt a three-core heterogeneous "CPU+DSP+NPU" architecture, in which the DSP and NPU are deeply integrated into a highly flexible, collaborative "AI-NPU" architecture. Besides running mainstream AI model frameworks, it can also support emerging AI operators, yielding an AI chip platform with high computing power efficiency, good flexibility, strong adaptability, and reliable security.
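A quick back-of-envelope check of what the quoted figures imply (derived numbers, not vendor specifications):

```python
throughput_tops = 0.1         # stated: 0.1 TOPS at 500 MHz
efficiency_tops_per_w = 6.4   # stated: 6.4 TOPS/W at INT8

# Power = throughput / efficiency.
power_w = throughput_tops / efficiency_tops_per_w
print(f"implied NPU power at full load ≈ {power_w * 1e3:.1f} mW")  # ≈ 15.6 mW

# Parallelism = ops per second / clock rate.
ops_per_cycle = 0.1e12 / 500e6
print(f"implied parallelism ≈ {ops_per_cycle:.0f} INT8 ops per cycle")  # 200
```

A full-load compute budget in the tens of milliwatts is what makes this class of chip plausible for battery-powered, always-on devices.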
Key Technical Advantages
Compared with traditional AI chips built on the von Neumann architecture, the Edge AI Chip Technology Based on Compute-in-Memory fundamentally addresses the "memory wall" and "power wall" problems, improving energy efficiency by a factor of ten to several dozen. The companion "ANDT" development toolchain supports mainstream deep learning frameworks such as TensorFlow, PyTorch, and ONNX, enabling automatic algorithm allocation and optimization that significantly simplifies the development process. The result is genuinely commercially viable technology.
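To sketch what "automatic algorithm allocation" across a CPU+DSP+NPU target can look like, the fragment below routes graph operators to backends via capability tables. The operator sets, function names, and fallback policy are hypothetical illustrations, not the actual ANDT toolchain API.

```python
# Hypothetical capability tables; real toolchains derive these from the target.
NPU_OPS = {"conv2d", "matmul", "relu"}     # high-throughput INT8 tensor ops
DSP_OPS = {"fft", "softmax", "layernorm"}  # signal-processing / vector ops

def assign_backend(op: str) -> str:
    """Route an operator to the most capable core, falling back to CPU."""
    if op in NPU_OPS:
        return "NPU"
    if op in DSP_OPS:
        return "DSP"
    return "CPU"  # fallback keeps unsupported or emerging operators runnable

graph = ["conv2d", "relu", "fft", "topk", "matmul", "softmax"]
plan = {op: assign_backend(op) for op in graph}
print(plan)
```

The CPU fallback path is what lets such a platform "support emerging AI operators": new operators run immediately on the CPU and can later be promoted to the DSP or NPU tables as accelerated kernels become available.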
Competitive Advantage Analysis
Compared to international competitors, Actions' Edge AI Chip Technology Based on Compute-in-Memory has significant advantages in power control and integration, making it particularly suitable for battery-powered edge AI devices. The full-stack proprietary technology system ensures supply chain security, while the open ecosystem strategy lowers customer development barriers, creating a virtuous cycle between technological barriers and market adoption.
Industry Impact and Prospects
This technology provides China's electronics industry with an opportunity to "overtake on a different track" in the edge AI chip field. Terminal products from well-known brands such as Hollyland equipped with this chip technology have been launched and sold in the market. With the rapid development of emerging markets such as AI glasses and smart wearables, the Edge AI Chip Technology Based on Compute-in-Memory is expected to help China occupy a leading position in the global edge AI chip market, providing core technical support for AI upgrades in full-scenario audio applications.
Malicious vote manipulation is expressly forbidden in this voting event. The organizers reserve the right to evaluate the fairness and accuracy of the voting results. AspenCore retains the authority to interpret the rules of this event.