The Web3-AI sector is attracting growing market attention as the AI narrative continues to gain momentum. Geekcartel has conducted an in-depth analysis of the technical logic, application scenarios, and representative projects in this space, offering a comprehensive overview of its landscape and future trends.
1. Web3-AI: The Technical Logic and Emerging Market Opportunities
1.1 The Integration Logic of Web3 and AI
Over the past year, AI narratives have gained significant traction in the Web3 industry, with AI projects springing up like mushrooms after rain. While many projects claim to incorporate AI, some use it only in isolated parts of their products, with no substantial connection between their underlying tokenomics and the AI functionality. Such projects are therefore excluded from the Web3-AI projects discussed in this article.
This article focuses on projects that leverage blockchain to address issues of production relations while using AI to solve productivity challenges. These projects not only offer AI-based products but also rely on Web3 economic models as tools for managing production relations, thereby complementing each other. We classify these types of projects as part of the Web3-AI sector. To help readers better understand, Geekcartel will first present an in-depth analysis of the AI development process and the associated challenges. Following this, the report will examine how the integration of Web3 and AI, leveraging their combined strengths, can effectively address these challenges while fostering the development of innovative application scenarios.
1.2 The AI Development Process and Challenges
AI technology enables computers to simulate, extend, and enhance human intelligence. It empowers computers to perform various complex tasks, such as language translation, image classification, facial recognition, and autonomous driving. AI is transforming the way we live and work across various applications.
The process of developing an AI model typically involves several key steps: data collection and preprocessing, model selection and tuning, model training, and inference. For example, to develop a model that classifies images of cats and dogs, you would need to follow these steps:
- Data Collection and Preprocessing: Gather an image dataset containing cats and dogs, which can be sourced from public datasets or collected from real-world data. Then, ensure that the labels are accurate. Convert the images into a format that the model can recognize, and divide the dataset into training, validation, and test sets.
- Model Selection and Tuning: Choose an appropriate model, such as a Convolutional Neural Network (CNN), which is well-suited for image classification tasks. Tune the model’s parameters or architecture based on specific needs. Typically, the number of layers in the model can be adjusted based on the complexity of the AI task. For this simple classification example, a shallower network might be sufficient.
- Model Training: The model can be trained using GPUs, TPUs, or high-performance computing clusters. The training time is influenced by the complexity of the model and the available computational power.
- Model Inference: The output of model training is typically called model weights. The inference process involves using the trained model to predict or classify new data. During this process, the model’s classification effectiveness can be tested using a test set or new data. Performance is usually evaluated using metrics such as accuracy, precision, recall, and F1-score.
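To make these four steps concrete, here is a minimal PyTorch sketch of the cat-and-dog example described above; the dataset path, network size, and hyperparameters are illustrative assumptions rather than a recommended configuration.

```python
# A minimal sketch of steps 1-4 (data prep, model choice, training, inference).
# The dataset path and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, random_split
from torchvision import datasets, transforms

# 1. Data collection and preprocessing: resize images, convert to tensors, split the set.
transform = transforms.Compose([transforms.Resize((128, 128)), transforms.ToTensor()])
dataset = datasets.ImageFolder("./cats_vs_dogs", transform=transform)  # hypothetical path with cat/ and dog/ subfolders
n_train = int(0.8 * len(dataset))
train_set, test_set = random_split(dataset, [n_train, len(dataset) - n_train])
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)
test_loader = DataLoader(test_set, batch_size=32)

# 2. Model selection and tuning: a shallow CNN is enough for a two-class task.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(32 * 32 * 32, 2),
)

# 3. Model training: minimize cross-entropy on the training split.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for epoch in range(5):
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()

# 4. Inference: the softmax output is the probability P of "cat" vs "dog" for each image.
model.eval()
correct, total = 0, 0
with torch.no_grad():
    for images, labels in test_loader:
        probs = torch.softmax(model(images), dim=1)   # predicted values P
        correct += (probs.argmax(dim=1) == labels).sum().item()
        total += labels.size(0)
print(f"test accuracy: {correct / total:.2%}")
```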
Figure 1: AI Development Process. Created by Geekcartel
As shown in Figure 1 (using CNN-based cat and dog classification as an example), after completing steps 1–3, running inference on the test set with the trained model yields predicted values P (probabilities) for cat and dog, indicating how likely the model considers an image to be either a cat or a dog.
The trained AI model can be integrated into various applications to perform different tasks. For example, the AI model for cat and dog classification can be incorporated into a mobile app where users can upload photos of cats or dogs to obtain classification results.
However, centralized AI development processes face several issues in the following scenarios:
- User Privacy: In centralized environments, AI development is often opaque, and user data may be harvested and used for AI training without users' knowledge.
- Data Source Access: Small teams or individuals may struggle to access specific domain data (e.g., medical data) because it is not open source.
- Model Selection and Tuning: It is challenging for small teams to access specialized domain model resources or to afford the high costs associated with model tuning.
- Computational Resources: For individual developers and small teams, the high costs of purchasing GPUs and renting cloud computing power can pose a significant financial burden.
- AI Asset Revenue: Data labeling workers often do not receive income commensurate with their efforts, and it can be difficult for AI developers to match their research results with interested buyers.
Integrating with Web3 can address the challenges in centralized AI scenarios. As a novel form of production relations, Web3 naturally complements AI, which represents a new form of productivity. This integration can simultaneously advance technology and production capabilities.
1.3 The Synergy Between Web3 and AI
The combination of Web3 with AI can enhance user sovereignty. Open-source AI collaboration platforms enable users to transition from mere consumers in the Web2 era to active participants, facilitating the creation of AI accessible to everyone. Furthermore, the convergence of the Web3 world with AI technology can spark many innovative application scenarios and interactions.
With the adoption of Web3 technology, the development and application of AI will usher in a new collaborative economic system. People’s data privacy can be protected, and the crowdsourcing model for data can foster advancements in AI models. An abundance of open-source AI resources will be available for user access and shared computational resources can be acquired at lower costs. Leveraging decentralized crowdsourcing mechanisms and open AI markets can establish a fair income distribution system, thereby motivating more people to drive the advancement of AI technology.
In Web3 scenarios, AI can positively influence several areas. For instance, AI models can be integrated into smart contracts to enhance efficiency in various application scenarios, such as market analysis, security detection, and social clustering. Generative AI can enable users to experience the role of an “artist,” for example, by creating their own NFTs using AI technology. It can also enrich GameFi by creating diverse game scenarios and engaging interactive experiences. The robust infrastructure provides a seamless development experience, allowing both AI experts and newcomers to the AI field to find suitable entry points in this ecosystem.
2. Project Landscape and Architecture
We have studied 41 projects within the Web3-AI track and categorized them into different tiers. The logic for each tier’s categorization is shown in Figure 2, including infrastructure, middle layer, and application layer, each divided into various segments. In the next section, we will provide an in-depth analysis of some representative projects.
The infrastructure layer encompasses the computational resources and technological frameworks that support the entire AI lifecycle. The middle layer includes data management, model development, and inference and verification services that connect the infrastructure to the applications. The application layer focuses on various user-facing applications and solutions.
Figure 2: Overview of the Web3-AI Ecosystem Projects. Created by Geekcartel
Infrastructure Layer:
The infrastructure layer forms the foundation of the AI lifecycle. In this report, we classify compute resources, AI Chain, and development platforms as part of the infrastructure layer. These elements enable AI model training, inference, and the delivery of practical AI applications to users.
- Decentralized Compute Networks: These networks provide distributed compute resources for AI model training, ensuring efficient and economic resource use. Some projects offer decentralized computing markets where users can rent compute resources at a low cost or earn revenue by sharing their own resources. Examples include IO.NET and Hyperbolic. Furthermore, some projects have developed innovative approaches, such as Compute Labs, which has introduced a tokenized protocol. Through this protocol, users can purchase NFTs representing actual GPUs, enabling them to participate in compute leasing in various ways to generate income.
- AI Chain: Using blockchain as the foundation of the AI lifecycle facilitates seamless interaction between on-chain and off-chain AI resources, thus advancing the development of the industry ecosystem. The on-chain decentralized AI market allows for the trading of AI assets such as data, models, and agents. It offers AI development frameworks along with accompanying tools, such as Sahara AI’s marketplace. Additionally, AI Chain can drive technological advancements in various AI fields. For instance, Bittensor uses an innovative subnet incentive mechanism to foster competition among different types of AI subnets.
- Development Platforms: Some projects, such as Fetch.ai and ChainML, provide AI agent development platforms that also enable the trading of AI agents. These one-stop tools help developers create, train, and deploy AI models more conveniently. Representative projects include Nimble. These infrastructure components facilitate the widespread application of AI technology within the Web3 ecosystem.
Middle Layer:
This layer involves AI data, models, as well as inference and verification, utilizing Web3 technology to achieve higher efficiency.
- Data: The quality and quantity of data are key factors affecting the effectiveness of model training. In the Web3 world, data crowdsourcing and collaborative data processing optimize resource utilization and reduce data costs. Users retain sovereignty over their data and can sell it while protecting their privacy, preventing unscrupulous businesses from exploiting their information for excessive profits. For data seekers, these platforms offer a wide selection at very low costs. Representative projects like Grass utilize user bandwidth to scrape web data, while xData collects media information through user-friendly plugins and supports the uploading of tweet information.
Additionally, some platforms allow domain experts or ordinary users to perform data preprocessing tasks such as image tagging and data classification. These tasks may require specialized knowledge for processing financial and legal data. Users can tokenize their skills to facilitate collaborative crowdsourcing for data preprocessing. For example, the AI market of Sahara AI releases data tasks spanning various domains, covering diverse data scenarios; AIT Protocol facilitates data labeling through human-machine collaboration.
- Models: In the previously mentioned AI development process, different types of needs require matching with suitable models. Common models for image tasks include CNN and GAN. For object detection tasks, the YOLO series can be chosen. Text-related tasks commonly use RNN and Transformer. Additionally, there are specific or general large-scale models available. Tasks of varying complexity require models of differing depths, and sometimes, tuning of these models is necessary.
Some projects support users in providing various types of models or collaboratively training models through crowdsourcing. For instance, Sentient allows users to place trusted model data in the storage layer and the distribution layer for model tuning. Sahara AI offers development tools that incorporate advanced AI algorithms and computing frameworks, which are equipped with the capability for collaborative training.
- Inference and Verification: After training, models generate weight files that can be used directly for classification, prediction, or other specific tasks, a process known as inference. The inference process typically includes a verification mechanism to ensure the source of the inference model is correct and free from malicious actions. In Web3-AI, inference can be integrated within smart contracts, executing model-based predictions. Common verification methods include technologies like ZKML (Zero-Knowledge Machine Learning), OPML (Optimistic Machine Learning), and TEE (Trusted Execution Environment). A representative project is ORA's On-chain AI Oracle (OAO), which incorporates OPML as a verifiable layer for the AI oracle. ORA's documentation also mentions its research on ZKML and opp/ai, which combines ZKML with OPML.
Application Layer:
This layer primarily focuses on user-facing applications that combine AI with Web3, creating more interesting and innovative use cases. This section mainly covers projects in AIGC (AI-generated content), AI agents, and data analysis.
- AIGC: AIGC can extend into Web3 fields such as NFTs and games. Users can generate text, images, and audio directly through prompts (user-provided keywords), and even create custom gameplay in games based on their preferences. NFT projects like NFPrompt allow users to generate NFTs through AI and trade them in the market. Games like Sleepless enable users to shape the personality of virtual companions through dialogue to match their preferences.
- AI Agents: These are intelligent systems capable of autonomously executing tasks and making decisions. AI agents typically possess abilities such as perception, inference, learning, and action, allowing them to perform complex tasks in various environments. Common AI agents handle tasks such as language translation, language learning, and image-to-text conversion. In Web3 scenarios, users can generate trading bots, create meme images, perform on-chain security checks, and more. For example, MyShell is an AI agent platform that offers various types of agents, including educational learning, virtual companions, and trading agents. It also provides user-friendly agent development tools, enabling users to build their own agents without needing to code.
- Data Analysis: By integrating AI technology and related databases, data analysis, judgment, and prediction can be achieved. In Web3, this can involve analyzing market data and smart money movements to assist users in making investment decisions. Token prediction is also a unique application scenario in Web3. Representative projects like Ocean have set up long-term token prediction challenges and released various themed data analysis tasks to incentivize user participation.
3. Cutting-Edge Projects in the Web3-AI Sector
Some projects are exploring the possibilities of combining Web3 with AI. Geekcartel will guide you through the representative projects, allowing you to experience the allure of Web3-AI and understand how these projects achieve the integration of Web3 and AI, creating new business models and economic value.
Sahara: AI Blockchain Platform for the Collaborative Economy
Sahara AI is highly competitive in this sector, aiming to build a comprehensive AI blockchain platform encompassing AI data, models, agents, and compute. The platform’s underlying architecture supports the collaborative economy. By using blockchain technology and unique privacy techniques, it ensures decentralized ownership and governance of AI assets throughout the AI development lifecycle, achieving fair incentive distribution. The team has a strong background in both AI and Web3, perfectly merging these two fields, and has gained the favor of top investors, showing immense potential in the industry.
Sahara AI is not limited to just Web3; it breaks through the unequal distribution of resources and opportunities in the traditional AI field. Through decentralization, critical AI elements like compute resources, models, and data are no longer monopolized by centralized giants. Everyone has the opportunity to find their niche in this ecosystem, benefit from it, and be motivated to unleash their creativity and collective intelligence.
Figure 3: User Journey on the Sahara AI Platform. Source: Sahara Blog
As illustrated, users can use the toolkit provided by Sahara to contribute or create their own datasets, models, AI agents, and other assets. These assets can be placed on the AI marketplace to generate profits while also earning platform incentives. Consumers can trade AI assets on demand. All transaction information will be recorded on the Sahara Chain, with blockchain technology and privacy protection measures ensuring the tracking of contributions, data security, and fairness of rewards.
In Sahara’s economic system, besides the roles of developers, knowledge providers, and consumers mentioned above, users can also participate as investors by providing financial resources and assets (such as GPUs, cloud servers, and RPC nodes) to support the development and deployment of AI assets. Users may also act as Operators to maintain network stability or as Validators to uphold the security and integrity of the blockchain. Regardless of how users participate in the Sahara AI platform, they will receive rewards and income based on their contributions.
The Sahara AI blockchain platform is built on a layered architecture, where on-chain and off-chain infrastructure enables users and developers to effectively contribute to and benefit from the entire AI development cycle. The architecture of the Sahara AI platform is divided into four layers:
Application Layer
The Application Layer of the Sahara AI platform serves as the primary interface for platform participants, providing natively built-in toolkits and applications to enhance user experience.
- Functional Components:
Sahara ID: Ensures secure user access to AI assets, and tracks and manages user contributions and reputation.
Sahara Vault: Protects the privacy and security of AI assets from unauthorized access and potential threats.
Sahara Agent: Features persona alignment (interacting in line with user behavior), lifelong learning, multimodal perception (handling various types of data), and multi-tool utilization capabilities.
- Interactive Components:
Sahara Toolkit: Supports both technical and non-technical users in creating and deploying AI assets.
Sahara AI Marketplace: Used for publishing, monetizing, and trading AI assets, offering dynamic licensing and various monetization options.
Transaction Layer
Sahara Blockchain, a Layer 1 blockchain in the Transaction Layer of the Sahara AI Platform, is designed to manage ownership, attribution, and AI-related transactions. It upholds the sovereignty and provenance of AI assets. The Sahara Blockchain integrates innovative Sahara AI-Native Precompiles (SAP) and Sahara Blockchain Protocols (SBP) to support essential tasks throughout the entire AI lifecycle.
- SAPs are built-in functions that operate at the native level of the blockchain, dedicated to the AI training and inference processes. They help invoke, record, and verify off-chain AI training and inference, ensuring the credibility and reliability of AI models developed within the Sahara AI platform, and make all AI inference activities transparent, verifiable, and accountable. Additionally, SAPs enable faster execution speeds, lower computational overhead, and reduced gas costs.
- SBP implements AI-specific protocols through smart contracts, ensuring that AI assets and computation results are handled transparently and reliably. This includes functions such as AI asset registration, licensing (access control), ownership, and attribution (contribution tracking).
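To illustrate the kind of bookkeeping such protocols perform, the following is a conceptual Python sketch of an AI-asset registry with licensing and contribution tracking. It is an assumption-level illustration of the functions named above, not Sahara's actual SBP contract code.

```python
# A conceptual sketch (not Sahara's SBP implementation) of registry-style bookkeeping
# for AI assets: registration, licensing (access control), and attribution tracking.
from dataclasses import dataclass, field

@dataclass
class AIAsset:
    asset_id: str
    owner: str
    metadata_uri: str                                   # points to off-chain dataset/model storage
    licensees: set = field(default_factory=set)
    contributions: dict = field(default_factory=dict)   # contributor -> share

class AssetRegistry:
    def __init__(self):
        self.assets = {}

    def register(self, asset_id, owner, metadata_uri):
        assert asset_id not in self.assets, "asset already registered"
        self.assets[asset_id] = AIAsset(asset_id, owner, metadata_uri)

    def grant_license(self, asset_id, caller, licensee):
        asset = self.assets[asset_id]
        assert caller == asset.owner, "only the owner can license the asset"
        asset.licensees.add(licensee)

    def record_contribution(self, asset_id, contributor, share):
        # attribution tracking: contribution shares can later drive reward distribution
        current = self.assets[asset_id].contributions.get(contributor, 0.0)
        self.assets[asset_id].contributions[contributor] = current + share

registry = AssetRegistry()
registry.register("model-001", owner="alice", metadata_uri="ipfs://...")  # placeholder URI
registry.grant_license("model-001", caller="alice", licensee="bob")
registry.record_contribution("model-001", contributor="carol", share=0.2)
```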
Data Layer
The data layer of Sahara is designed to optimize data management throughout the AI lifecycle. It acts as a crucial interface, connecting the execution layer to various data management mechanisms and seamlessly integrating on-chain and off-chain data sources.
- Data Components: This includes both on-chain and off-chain data. On-chain data encompasses metadata, attribution, commitments, and proofs related to AI assets, while datasets, AI models, and supplemental information are stored off-chain.
- Data Management: Sahara’s data management solution provides a set of security measures to ensure that data is protected both during transmission and at rest through proprietary encryption schemes. In collaboration with AI licensing SBP, it implements strict access control and verifiability while offering private domain storage to enhance users’ sensitive data security.
Execution Layer
The Execution Layer is the off-chain AI infrastructure of the Sahara AI platform that interacts seamlessly with the Transaction Layer and Data Layer to execute and manage protocols related to AI computation and functionality. Based on the execution task, it securely pulls data from the Data Layer and dynamically allocates computational resources for optimal performance. Through a suite of specialized protocols designed to facilitate efficient interactions among various abstractions, it orchestrates complex AI operations. The underlying infrastructure is designed to support high-performance AI computation.
- Infrastructure: The infrastructure of the Execution Layer of Sahara AI is designed to support high-performance AI computation characterized by speed, efficiency, resilience, and high availability. It ensures stability and reliability under high traffic and failure conditions through efficient coordination of AI computations, auto-scaling mechanisms, and fault-tolerant design.
- Abstractions: Core abstractions are the fundamental components that form the basis of AI operations on the Sahara AI platform, including abstractions of resources such as datasets, models, and computational resources. Higher-level abstractions, such as the execution interfaces behind Vaults and agents, are built on these core abstractions, enabling more advanced functionalities.
- Protocols: Abstracted execution protocols are used to interact with vaults and agents, coordinate tasks, and collaborate on computations. Among these, the collaborative computation protocol enables joint AI model development and deployment among multiple participants, supporting the contribution of computational resources and model aggregation. The Execution Layer also includes low-computational-cost technical modules (PEFT), privacy-preserving compute modules, and computation fraud-proof modules.
Sahara is building an AI blockchain platform that fosters a comprehensive AI ecosystem. However, realizing this ambitious vision will inevitably encounter numerous challenges that require robust technology, resource support, and continuous optimization. If successfully implemented, it will become a pillar supporting the Web3-AI sector and has the potential to become an ideal haven for Web2-AI professionals.
Team Information:
Sahara’s team is composed of outstanding and creative members. Co-founder Sean Ren is a professor at the University of Southern California and has received honors such as Samsung AI Researcher of the Year, MIT TR35 Innovators Under 35, and Forbes 30 Under 30. Co-founder Tyler Zhou graduated from the University of California, Berkeley, and has a deep understanding of Web3. He leads a global team of talents with experience in AI and Web3.
Since its inception, Sahara's team has generated millions of dollars in revenue from top-tier companies and institutions, including Microsoft, Amazon, MIT, Snapchat, and Character AI. Currently, Sahara serves over 30 enterprise clients and has more than 200,000 AI trainers worldwide. Sahara's rapid growth has allowed more participants to contribute to and benefit from the shared economy model.
Funding Information:
As of August this year, Sahara Labs has successfully raised $43 million. The latest round of funding was co-led by Pantera Capital, Binance Labs, and Polychain Capital. It also received support from pioneers in the AI field, including Motherson Group, Anthropic, Nous Research, and Midjourney.
Bittensor: A New Approach with Subnet Competition Incentives
Bittensor is not an AI product, nor does it produce or provide any AI products or services. Bittensor is an economic system that offers a highly competitive incentive structure for AI producers, encouraging them to continuously optimize the quality of their AI. As an early project in the Web3-AI sector, Bittensor has garnered significant market attention since its launch. According to CoinMarketCap data, as of October 17, its market cap has exceeded $4.26 billion, with a fully diluted valuation (FDV) surpassing $12 billion.
Bittensor has built a network architecture composed of many interconnected subnets. AI producers can create subnets with customized incentives and different use cases. Different subnets handle different tasks, such as machine translation, image recognition and generation, and large language models. For example, Subnet 5 can create AI images similar to Midjourney's. Subnets are rewarded with TAO (Bittensor's token) when they perform their tasks well.
The incentive mechanism is a fundamental component of Bittensor. It drives the behavior of subnet miners and controls the consensus among subnet validators. Each subnet has its own incentive mechanism. Subnet miners are responsible for executing tasks, while validators rate the results of subnet miners.
Figure 4: Subnet Validation Process. Source: Bittensor Documentation
As illustrated, consider an example with three subnet miners (UID37, UID42, and UID27) and four subnet validators (UID10, UID32, UID93, and UID74). The workflow between them proceeds as follows:
- Each subnet validator maintains a vector of weights. Each element of the vector is the weight assigned to a subnet miner, reflecting how well that miner is performing according to this validator.
- Each subnet validator ranks all the subnet miners using this weight vector and, acting independently, transmits its ranking weight vector to the blockchain. These vectors can arrive at the blockchain at different times; typically, each subnet validator transmits an updated ranking weight vector every 100–200 blocks.
- The blockchain (subtensor) waits until the latest ranking weight vectors from all the subnet validators of the given subnet arrive at the blockchain. A ranking weight matrix formed from these ranking weight vectors is then provided as input to the Yuma Consensus module on-chain.
- The on-chain Yuma Consensus (YC) then uses this weight matrix, along with the amount of stake associated with the UIDs on this subnet, to calculate how the reward TAO tokens should be distributed amongst the subnet validators and subnet miners, i.e., amongst each UID in the subnet.
Subnet validators can transmit their ranking weight vectors to the blockchain at any time. However, for any user-created subnet, the YC for that subnet runs every 360 blocks (4,320 seconds, or 72 minutes, at 12 seconds per block), using the latest weight matrix available at that point. If a ranking weight vector from a subnet validator arrives after the start of a 360-block period, it is used in the next YC run, i.e., after the current 360 blocks have elapsed. Each cycle ends with the distribution of TAO rewards.
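The sketch below illustrates the reward bookkeeping described above with toy numbers: validators' ranking weight vectors are stacked into a matrix and aggregated by stake to split a cycle's miner emission. It is a simplified illustration of the idea, not Bittensor's actual Yuma Consensus implementation.

```python
# An illustrative sketch (not Bittensor's Yuma Consensus code): stack ranking weight
# vectors into a matrix and split miner rewards by stake-weighted consensus scores.
import numpy as np

# rows = validators (UID10, UID32, UID93, UID74), columns = miners (UID37, UID42, UID27)
weight_matrix = np.array([
    [0.6, 0.3, 0.1],
    [0.5, 0.4, 0.1],
    [0.7, 0.2, 0.1],
    [0.6, 0.3, 0.1],
])
validator_stake = np.array([1000.0, 500.0, 2000.0, 750.0])  # TAO staked per validator (illustrative)

# stake-weighted consensus score for each miner
stake_share = validator_stake / validator_stake.sum()
miner_scores = stake_share @ weight_matrix

# split this cycle's miner emission in proportion to the consensus scores
miner_emission = 100.0  # TAO to distribute to miners this cycle (illustrative)
miner_rewards = miner_emission * miner_scores / miner_scores.sum()
print(dict(zip(["UID37", "UID42", "UID27"], miner_rewards.round(2))))
```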
Yuma Consensus is the core algorithm for achieving fair node distribution in Bittensor. It is a hybrid consensus mechanism that combines elements of Proof of Work (PoW) and Proof of Stake (PoS). Similar to Byzantine Fault Tolerance consensus mechanisms, if the majority of validators in the network are honest, a correct decision can eventually be reached through consensus.
The Root Network is a special type of subnet, also known as Subnet 0. By default, the 64 subnet validators with the highest stakes across all subnets are validators in the Root Network. These Root Network validators evaluate and rank the output quality of each Subnet. The evaluation results from the 64 validators are aggregated, and the final emission results are determined by the Yuma Consensus algorithm. The newly minted TAO is then allocated to each Subnet based on these final results.
Although Bittensor’s subnet competition model has improved the quality of AI products, it also faces several challenges.
- Firstly, the incentive mechanisms established by subnet owners determine the miners’ earnings, which can directly affect their motivation to work.
- Another issue is that validators determine the token allocation for each subnet, yet there is a lack of clear incentives for choosing subnets that benefit Bittensor's long-term productivity. This design could lead to validators favoring subnets with which they have relationships or those offering additional benefits.
To address this problem, contributors at the Opentensor Foundation have proposed BIT001: a dynamic TAO solution that suggests determining the token allocation for all competing TAO stakers through market mechanisms.
Team Information:
Co-founder Ala Shaabana is a postdoctoral researcher at the University of Waterloo with an academic background in computer science. Another co-founder, Jacob Robert Steeves, graduated from Simon Fraser University in Canada, has nearly 10 years of research experience in machine learning, and has worked as a software engineer at Google.
Funding Information:
In addition to receiving funding from the OpenTensor Foundation, a nonprofit organization supporting Bittensor, the community has announced that renowned crypto VCs Pantera and Collab Currency have become TAO token holders and will provide further support for the project’s ecosystem development. Other major investors include well-known investment institutions and market makers such as Digital Currency Group, Polychain Capital, FirstMark Capital, and GSR.
Talus: On-Chain AI Agent Ecosystem Based on Move
Talus Network is an L1 blockchain built on MoveVM that is purpose-built for AI agents. These AI agents can make decisions and take actions based on predefined objectives, enabling seamless inter-chain interactions while ensuring verifiability. Users can quickly build AI agents using Talus’s development tools and integrate them into smart contracts. Talus also provides an open AI marketplace for resources such as AI models, data, and computing, allowing users to tokenize their contributions and assets.
One of Talus’s key features is its parallel execution and secure execution capabilities. With the influx of capital into the Move ecosystem and the expansion of high-quality projects, Talus’s dual highlights of secure execution based on Move and integrating AI agents with smart contracts are expected to attract significant attention in the market. Additionally, Talus supports multi-chain interactions, which can enhance AI agents’ efficiency and promote AI’s flourishing on other chains.
According to its official Twitter account, Talus recently introduced Nexus, the first fully on-chain autonomous AI agent framework. This gives Talus a first-mover advantage in the decentralized AI technology sector, providing significant competitive strength in the rapidly growing blockchain AI market. Nexus empowers developers to create AI-driven digital assistants on the Talus network, ensuring censorship resistance, transparency, and composability. Unlike centralized AI solutions, Nexus allows consumers to enjoy personalized intelligent services, securely manage digital assets, automate online interactions, and enhance their daily digital experience.
As the first developer toolkit for onchain agents, Nexus provides a foundation for building the next generation of consumer Crypto AI applications. It offers tools, resources, and standards to create Talus Agents that can carry out user intents and communicate with each other on the Talus chain.
Figure 5: The architecture of Talus. Source: Talus Lightpaper
As shown in figure 5, Talus is based on a modular design that enables interaction between on-chain and off-chain resources. It also possesses the flexibility to operate across multiple chains, forming a thriving ecosystem of on-chain smart agents.
The protocol is the heart of Talus. It provides the consensus, execution, and interoperability foundation on top of which on-chain smart agents are built, utilizing off-chain resources and functionality across chain boundaries.
- Protochain Node: Protochain is the codename for this Proof-of-Stake (PoS) blockchain node, powered by Cosmos SDK and CometBFT. The Cosmos SDK features a modular design and high scalability, while CometBFT is based on the Byzantine Fault Tolerance consensus algorithm, characterized by high performance and low latency. This combination provides robust security and fault tolerance, enabling the system to continue operating normally despite partial node failures or malicious behavior.
- Sui Move and MoveVM: Utilizing Sui Move as the smart contract language, the design of the Move language inherently enhances security by eliminating critical vulnerabilities such as reentrancy attacks, lack of access control checks for object ownership, and unintended arithmetic overflow/underflow. The architecture of Move VM supports efficient parallel processing, enabling Talus to scale by handling multiple transactions simultaneously without compromising security or integrity.
IBC (Inter-Blockchain Communication protocol):
- Interoperability: IBC facilitates seamless interoperability between different blockchains, allowing smart agents to interact with and utilize data or assets across multiple chains.
- Cross-Chain Atomicity: IBC supports atomic transactions across chains. This feature is vital for maintaining consistency and reliability in operations conducted by smart agents, particularly in financial applications or complex workflows.
- Scalability Through Sharding: By enabling smart agents to operate across multiple blockchains, IBC indirectly supports scalability through sharding. Each blockchain can be considered a shard that processes a portion of the transactions, reducing the load on any single chain. This allows smart agents to manage and execute tasks in a more distributed and scalable manner.
- Customizability and Specialization: Through IBC, different blockchains can focus on specific functions or optimizations. For example, a smart agent might use a chain optimized for fast transactions to handle payment processing, while another chain specialized in secure data storage could be used for record-keeping.
- Security and Isolation: IBC maintains security and isolation between chains, which is advantageous for smart agents handling sensitive operations or data. Since IBC ensures secure inter-chain communication and transaction verification, smart agents can operate confidently across different chains without compromising security.
Mirror Object:
Mirror objects are primarily used to verify and link AI resources to represent the off-chain world within an on-chain architecture. This includes the unique representation and proof of resources, the traceability of off-chain resources, and the representation or verifiability of ownership.
Mirror objects consist of three different types: model objects, data objects, and computation objects.
- Model Objects: Model owners can introduce their AI models into the ecosystem through a dedicated model registry. This process transforms the AI model into what is known as a Model Object: a digital representation that encapsulates the essence and capabilities of the model with ownership, management, and monetization frameworks directly built on top. The Model Object is a flexible asset that can undergo additional fine-tuning processes to sharpen its abilities or, if necessary, be entirely reshaped through extensive training to meet specific needs.
- Data Objects: The Data (or Dataset) Object acts as a digital form of a unique dataset that someone owns. This object has different capabilities, enabling it to be created, transferred, authorized, or converted to an open data source.
- Computation Objects: Buyers submit computation tasks to the owners of the objects, who then provide the computation results along with the corresponding proofs. Buyers hold a key that can be used to decrypt the commitments and verify the results.
AI Stack:
Talus offers an SDK and integration components that support the development of smart agents and their interaction with off-chain resources. This AI stack also integrates with Oracles, ensuring smart agents can leverage off-chain data for decision-making and responses.
Onchain Smart Agents:
- Talus provides a smart agent economy where these agents can autonomously operate, make decisions, execute transactions, and interact with both on-chain and off-chain resources.
- Smart agents possess autonomy, social ability, reactivity, and proactivity. Autonomy allows them to operate without human intervention; social ability enables them to interact with other agents and humans; reactivity allows them to sense environmental changes and respond promptly (Talus supports agent responses to on-chain and off-chain events through listeners, as sketched below); and proactivity enables them to act based on goals, predictions, or anticipated future states.
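The sketch below illustrates this listener-based reactivity pattern in plain Python; the class, event, and handler names are assumptions for illustration and do not reflect Talus's actual agent interfaces.

```python
# A minimal sketch, under assumed names, of the reactivity pattern described above:
# an agent registers listeners for on-chain or off-chain events and reacts to them.
class SmartAgent:
    def __init__(self, goal):
        self.goal = goal
        self.listeners = {}          # event name -> handler

    def on(self, event, handler):
        # register a listener for a named event
        self.listeners[event] = handler

    def notify(self, event, payload):
        # deliver an observed event to the matching listener, if any
        if event in self.listeners:
            self.listeners[event](payload)

agent = SmartAgent(goal="rebalance portfolio")
# reactivity: respond to an on-chain price event
agent.on("price_update", lambda p: print(f"reacting to price {p['price']} for {p['asset']}"))
# proactivity could be modelled as scheduled handlers that act toward the agent's goal
agent.notify("price_update", {"asset": "SUI", "price": 1.42})
```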
In addition to the comprehensive development framework and infrastructure for smart agents provided by Talus, AI agents built on Talus also support various verifiable AI inference methods (such as opML and zkML), ensuring transparency and credibility. The suite of facilities designed by Talus specifically for AI agents enables multi-chain interactions and mapping functions between on-chain and off-chain resources.
The on-chain AI agent ecosystem proposed by Talus holds significant importance for advancing the integration of AI and blockchain technologies; however, its implementation still presents certain challenges. Talus’ infrastructure provides flexibility and interoperability for the development of AI agents, but as more AI agents operate on the Talus chain, it remains to be seen whether the interoperability and efficiency of these agents can meet user needs. Talus is currently in its private testnet phase and undergoing continuous development and updates. It is anticipated that Talus will further drive the evolution of the on-chain AI agent ecosystem in the future.
Team Information:
Mike Hanono is the founder and CEO of Talus Network. He holds a Bachelor’s degree in Industrial and Systems Engineering and a Master’s in Applied Data Science from the University of Southern California. He has also participated in a Wharton School of the University of Pennsylvania program. Mike has extensive experience in data analysis, software development, and project management.
Funding Information:
In February of this year, Talus successfully closed its first $3 million funding round. Polychain Capital led the round, which included participation from Dao5, Hash3, TRGC, WAGMI Ventures, and Inception Capital. Notable angel investors included individuals from Nvidia, IBM, Blue7, Symbolic Capital, and Render Network.
ORA: The Foundation of On-Chain Verifiable AI
ORA’s product, OAO (On-chain AI Oracle), is the world’s first AI oracle utilizing opML, capable of bringing AI model inference results onto the blockchain. This means that smart contracts can interact with the OAO to implement AI functionalities on-chain. Additionally, ORA’s AI oracle can seamlessly integrate with Initial Model Offering (IMO), providing a comprehensive on-chain AI service.
ORA holds a first-mover advantage both technically and in the market. As a trustless AI oracle on Ethereum, it is expected to have a profound impact on its broad user base, with more innovative AI application scenarios anticipated in the future. Developers can now utilize models provided by ORA to implement decentralized inference within smart contracts and can build verifiable AI dApps on Ethereum, Arbitrum, Optimism, Base, Polygon, Linea, and Manta. In addition to offering verification services for AI inference, ORA also provides Initial Model Offering (IMO) to promote contributions to open-source models.
ORA’s two main products are Initial Model Offering (IMO) and On-Chain AI Oracle (OAO), which perfectly complement each other to enable on-chain AI model acquisition and AI inference verification.
- IMO incentivizes long-term open-source contributions by tokenizing the ownership of open-source AI models. Token holders will receive a portion of the revenue generated from the on-chain use of these models. Additionally, ORA provides funding to AI developers, encouraging community and open-source contributions.
- OAO introduces on-chain verifiable AI inference. ORA incorporates opML as a verifiable layer for the AI oracle. Similar to the workflow of an Optimistic Rollup, validators or any network participants can check the results during a challenge period. If a challenge is successful, incorrect results are updated on the blockchain. After the challenge period ends, the results are finalized and become immutable.
Figure 6: ORA Workflow. Source: ORA Documentation
Establishing a verifiable and decentralized oracle network is crucial to ensuring the validity of computational results on the blockchain. This process involves a proof system that guarantees the computations are reliable and authentic.
For this purpose, ORA provides three proof system frameworks:
- opML for AI Oracle (currently supported by ORA’s AI Oracle)
- zkML from keras2circom (a mature and high-performance zkML framework)
- zk+opML combining the privacy of zkML and the scalability of opML, achieving future on-chain AI solutions through opp/ai
opML:
opML (Optimistic Machine Learning), invented and developed by ORA, combines machine learning with blockchain technology. By leveraging principles similar to Optimistic Rollups, opML ensures the validity of computations in a decentralized manner. This framework allows for on-chain verification of AI computations, enhancing transparency and fostering trust in machine learning inference.
To ensure security and correctness, opML incorporates the following fraud-proof mechanism:
- Submission of Results: The service provider (submitter) performs the ML computation offchain and submits the result to the blockchain.
- Verification Period: Validators (or challengers) have a predefined period (challenge period) to verify the correctness of the submitted result.
- Dispute Resolution: If a validator detects an incorrect result, they initiate an Interactive Dispute Game. The dispute game efficiently pinpoints the exact computation step where the error occurred.
- On-Chain Verification: Only the disputed computation step is verified on-chain using the Fraud Proof Virtual Machine (FPVM), minimizing resource usage.
- Finalization: If no disputes are raised during the challenge period, or after disputes are resolved, the result is finalized on the blockchain.
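The toy sketch below walks through the core of this flow: a submitted trace is compared against an honest re-execution, and a bisection-style search pinpoints the single step that would need on-chain verification. It is a simplified illustration of the mechanism, not ORA's opML code.

```python
# A simplified sketch of the optimistic dispute flow described above (not ORA's opML
# implementation): a result stands unless a challenger locates a diverging step via
# bisection, and only that step would be re-executed on-chain.
def run_ml_trace(inputs):
    """Stand-in for the off-chain ML computation; returns every intermediate state."""
    states = [inputs]
    for step in range(4):
        states.append(states[-1] * 2 + 1)   # toy computation steps
    return states

def find_divergent_step(honest_trace, submitted_trace):
    """Bisection-style search for the first diverging step.
    Assumes the traces agree before the faulty step and differ after it, which holds
    when a submitter executes honestly up to a point and then deviates."""
    lo, hi = 0, len(honest_trace) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if honest_trace[mid] == submitted_trace[mid]:
            lo = mid + 1
        else:
            hi = mid
    return lo

honest = run_ml_trace(3)
submitted = honest[:3] + [999, 999 * 2 + 1]        # submitter cheats from step 3 onward
step = find_divergent_step(honest, submitted)
print(f"dispute game pinpoints step {step}; only this step is re-verified on-chain")
```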
ORA’s opML technology enables computations to be conducted off-chain in optimized environments, with only minimal data processed on-chain during disputes. This approach avoids the costly proof generation required for Zero-Knowledge Machine Learning (zkML), thereby reducing computational costs. It is capable of managing large-scale computations that traditional on-chain methods find challenging.
keras2circom (zkML):
zkML is a proof framework that leverages zero-knowledge proofs to verify machine learning inference results on-chain. Due to its privacy-preserving nature, it can protect sensitive data and model parameters during training and inference, addressing privacy concerns. Since the actual computation is performed off-chain and only the validity of the results is verified on-chain, it reduces the computational load on the blockchain.
Keras2Circom, built by ORA, is the first battle-tested high-level zkML framework. According to benchmark tests of leading zkML frameworks funded by the Ethereum Foundation ESP proposal [FY23–1290], Keras2Circom and its underlying circomlib-ml have demonstrated higher performance compared to other frameworks.
opp/ai (opML + zkML):
ORA has introduced OPP/AI (Optimistic Privacy-Preserving AI on Blockchain), which integrates Zero-Knowledge Machine Learning (zkML) for privacy with Optimistic Machine Learning (opML) for efficiency, creating a hybrid model tailored for on-chain AI. By strategically partitioning machine learning (ML) models, OPP/AI balances computational efficiency and data privacy, enabling secure and efficient AI services onchain.
opp/ai divides the ML model into submodels based on privacy requirements: zkML Submodels handle components that process sensitive data or proprietary algorithms, executed using Zero-Knowledge Proofs to ensure data and model confidentiality. opML Submodels handle components where efficiency is prioritized over privacy, executed using the optimistic approach of opML for maximum efficiency.
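As a rough illustration of this partitioning, the sketch below routes layers marked as privacy-sensitive to a zkML path and the rest to an opML path. The layer names and sensitivity flags are assumptions for illustration, not ORA's actual model splits.

```python
# An illustrative sketch of the opp/ai partitioning idea (assumed layer names):
# privacy-sensitive layers go to the zkML prover, the rest to the opML path.
model_layers = [
    {"name": "embedding",   "sensitive": True},   # touches user data / proprietary weights
    {"name": "encoder_1",   "sensitive": False},
    {"name": "encoder_2",   "sensitive": False},
    {"name": "output_head", "sensitive": True},
]

zkml_submodel = [layer["name"] for layer in model_layers if layer["sensitive"]]
opml_submodel = [layer["name"] for layer in model_layers if not layer["sensitive"]]

print("proved with zero-knowledge proofs:", zkml_submodel)
print("verified optimistically:", opml_submodel)
```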
In summary, ORA innovatively proposes three proof frameworks: opML, zkML, and opp/ai (a combination of opML and zkML). These diversified proof frameworks enhance data privacy and computational efficiency, bringing greater flexibility and security to blockchain applications.
As the first AI oracle, ORA possesses immense potential and vast imaginative possibilities. ORA has published numerous research papers and results, showcasing its technical advantages. However, the inference process of AI models carries inherent complexities and verification costs, leading to questions about whether the speed of on-chain AI inference can satisfy user needs. With time and continuous optimization of user experience, such AI products could potentially become a powerful tool for enhancing the efficiency of on-chain Dapps.
Team Information:
Co-founder Kartin: Graduated in Computer Science from the University of Arizona, previously served as a tech lead at TikTok and a software engineer at Google.
Chief Scientist Cathie: Holds a Master’s degree in Computer Science from the University of Southern California, a Ph.D. in Psychology and Neuroscience from the University of Hong Kong, and was a ZKML researcher at the Ethereum Foundation.
Funding Information:
On June 26 of this year, ORA announced the completion of a $20 million funding round, with investment institutions including Polychain Capital, HF0, Hashkey Capital, SevenX Ventures, and Geekcartel.
Grass: The Data Layer for AI Models
Grass focuses on transforming public network data into AI datasets. The network utilizes users’ surplus bandwidth to scrape data from the internet without accessing users’ personal information. This type of network data is indispensable for the development of AI models and the operations of various other industries. Users can run nodes and earn Grass points, and setting up a node on Grass is as simple as registering and installing a Chrome extension.
Grass connects AI data demanders with data providers, creating a “win-win” situation. Its advantages include easy installation and the prospect of future airdrops, which greatly enhance user participation, thereby providing more data sources for demanders. As data providers, users do not need to perform complex setups and actions; data scraping, cleaning, and other operations are carried out without users’ awareness. Additionally, there are no special requirements for devices, lowering the participation threshold for users. The invitation mechanism also effectively encourages more users to join quickly.
Since Grass's data-scraping operations generate tens of millions of web requests per minute, each of which needs to be verified, the required throughput exceeds what any L1 can provide. In March, the Grass team announced a plan to build a rollup to support users and builders in verifying data sources. This plan involves using ZK processors to batch-process metadata for verification, with proofs for each dataset's metadata stored on Solana's settlement layer, generating a data ledger.
Figure 7: Grass Architecture Design. Source: Grass Blog
As shown in the figure, clients send web requests, which are routed through validators and eventually to Grass nodes. The servers of the websites respond to these web requests, allowing their data to be scraped and returned. The purpose of the ZK processor is to help record the provenance of datasets scraped on the Grass network. This means that whenever a node scrapes the web, it can receive its rewards without revealing any personal identity information. After being recorded in the data ledger, the collected data is cleaned and structured using an Edge Embedding Model and then used for AI training.
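The sketch below illustrates, at a conceptual level, how scrape metadata could be batched and committed to a ledger without recording personal information. The field names are assumptions for illustration, and the snippet is not Grass's implementation.

```python
# A conceptual sketch (not Grass's implementation) of dataset provenance recording:
# each scrape produces metadata, batches of metadata are hashed, and the batch
# commitment is what a settlement-layer record could conceptually contain.
import hashlib
import json
import time

def scrape_metadata(url, node_id):
    # only provenance of the scrape is recorded, no personal identity information
    return {"url": url, "node": node_id, "timestamp": int(time.time())}

def batch_commitment(batch):
    payload = json.dumps(batch, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

batch = [
    scrape_metadata("https://example.com/page1", "node-17"),
    scrape_metadata("https://example.com/page2", "node-42"),
]
ledger_entry = {"batch_root": batch_commitment(batch), "size": len(batch)}
print(ledger_entry)
```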
In summary, Grass allows users to contribute their surplus bandwidth to scrape web data and earn passive income while protecting personal privacy. This design not only provides economic benefits to users but also offers AI companies a decentralized way to obtain large amounts of real data.
While Grass significantly lowers the barriers to entry and facilitates increased user participation, project teams must consider that, alongside genuine user engagement, an influx of “bonus hunters” could introduce a large volume of junk information, increasing the burden of data processing. Therefore, it is crucial for project teams to establish sensible incentive mechanisms and set appropriate pricing for data to capture truly valuable information. This matters for both the project teams and the users. If users feel confused or perceive unfairness in airdrop distributions, it could lead to distrust towards the project team, thereby affecting the project’s consensus and development.
Team Information:
Founder Dr. Andrej graduated from York University in Canada with a major in Computational and Applied Mathematics. CTO Chris Nguyen has many years of experience in data processing, and his data company has received multiple honors, including the IBM Cloud Embedded Excellence Award, Top 30 Enterprise Technologies, and Forbes Cloud 100 Rising Stars.
Funding Information:
Grass is the first product launched by the Wynd Network team, which closed a $3.5 million seed round led by Polychain Capital and Tribe Capital in December 2023. Bitscale, Big Brain, Advisors Anonymous, Typhon V, Mozaik, and others participated. No Limit Holdings previously led the pre-seed round, raising a total of $4.5 million.
In September of this year, Grass completed a Series A round of financing led by Hack VC, with participation from Polychain, Delphi Digital, Brevan Howard Digital, Lattice Fund, and others. The amount raised was not disclosed.
IO.NET: Decentralized Compute Resource Platform
IO.NET builds a decentralized GPU network on Solana, aggregating idle compute resources from around the globe. This allows AI engineers to access the necessary GPU compute resources at lower costs, with greater accessibility and flexibility. ML teams can build model training and inference service workflows on the distributed GPU network.
IO.NET not only provides income for users with idle compute resources but also significantly reduces the compute burden for small teams or individuals. With Solana’s high throughput and efficient execution, it has inherent advantages in network scheduling for GPUs. IO.NET has attracted significant attention and gained favour from top institutions since its launch. According to CoinMarketCap data, as of October 17, the market cap of its token has exceeded $220 million, with a fully diluted valuation (FDV) surpassing $1.47 billion.
One of IO.NET’s core technologies is the IO-SDK, a custom fork of Ray (an open-source framework used by OpenAI to scale machine learning and other AI and Python applications to clusters for massive computing tasks). Leveraging Ray’s native parallelism, the IO-SDK can parallelize Python functions and supports integration with mainstream ML frameworks such as PyTorch and TensorFlow. Its in-memory storage allows for rapid data sharing between tasks, eliminating serialization delays.
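Since IO-SDK is described as a fork of Ray, the sketch below uses the open-source Ray API to show the parallelism pattern in question, fanning a Python function out across a cluster and gathering the results. It illustrates the pattern rather than IO-SDK's own interface.

```python
# A minimal sketch of Ray-style parallelism (open-source Ray API, not IO-SDK itself).
import ray

ray.init()  # on a cluster this would attach to the distributed GPU/CPU nodes

@ray.remote
def preprocess_shard(shard):
    # any Python function can be fanned out across the cluster as a remote task
    return [x * 2 for x in shard]

shards = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
futures = [preprocess_shard.remote(s) for s in shards]   # scheduled in parallel
results = ray.get(futures)                               # results gathered via shared object store
print(results)  # [[2, 4, 6], [8, 10, 12], [14, 16, 18]]
```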
Figure 8: IO.NET Product Components. Source: IO.NET Documentation
Product Components:
- IO Cloud: Designed for on-demand deployment and management of decentralized GPU clusters, seamlessly integrated with IO-SDK, offering a comprehensive solution for scaling AI and Python applications. It provides computing power while simplifying the deployment and management of GPU/CPU resources. Potential risks are reduced through firewalls, access control, and modular design, isolating different functions to increase security.
- IO Worker: This web application interface allows users to manage their GPU node operations. Features include monitoring computing activities, tracking temperature and power consumption, installation assistance, security measures, and revenue status.
- IO Explorer: Primarily provides users with comprehensive statistics and visualization of various aspects of the GPU cloud, allowing users to view network activities, key statistics, data points, and reward transactions in real time.
- IO ID: Users can view their account status, including wallet address activity and wallet balance, and claim their earnings.
- IO Coin: Supports users in viewing the information of IO.NET tokens.
- BC8.AI: An AI image generation website supported by IO.NET, allowing users to achieve text-to-image AI generation.
IO.NET aggregates over one million GPU resources from cryptocurrency miners, projects like Filecoin and Render, and other idle computing power, allowing AI engineers or teams to customize and purchase GPU compute services according to their needs. IO.NET not only optimizes resource utilization but also reduces high computing costs, promoting broader AI and computing applications.
As a decentralized computing power platform, IO.NET should focus on user experience, the richness of computing resources, and resource scheduling and monitoring. These are crucial factors in the competitive landscape of decentralized compute. However, there have been controversies regarding resource scheduling issues, with some questioning the mismatch between resource scheduling and user orders. Although we cannot confirm the authenticity of these claims, it serves as a reminder for related projects to focus on optimizing these aspects and improving user experience. Without user support, even the most exquisite project becomes merely decorative.
Team Information:
Founder Ahmad Shadid previously worked as a quantitative systems engineer at WhalesTrader and served as a contributor and mentor at the Ethereum Foundation. Chief Technology Officer Gaurav Sharma was formerly a senior development engineer at Amazon, an architect at eBay, and worked in the engineering department at Binance.
Funding Information:
On May 1, 2023, the team announced the completion of a $10 million seed funding round.
On March 5, 2024, it announced that it had completed a $30 million Series A funding round led by Hack VC. Multicoin Capital, 6th Man Ventures, M13, Delphi Digital, Solana Labs, Aptos Labs, Foresight Ventures, Longhash, SevenX, ArkStream, Animoca Brands, Continue Capital, MH Ventures, Sandbox Games and others participated.
MyShell: An AI Agent Platform Connecting Consumers and Creators
MyShell is a decentralized AI consumer layer that connects consumers, creators, and open-source researchers. Users can utilize the AI agents provided by the platform or build their own AI agents or applications on MyShell’s development platform. MyShell offers an open marketplace where users can freely trade AI agents. In MyShell’s AIpp Store, users can find various types of AI agents, including virtual companions, trading assistants, and AIGC (AI-generated content) agents.
MyShell serves as an accessible alternative to AI chatbots like ChatGPT, providing a comprehensive AI functionality platform that lowers the barrier for users to utilize AI models and agents, enabling them to experience a full range of AI capabilities. For example, a user might want to use Claude for literature organization and writing optimization while using Midjourney to generate high-quality images. Typically, this would require the user to register multiple accounts on different platforms and pay for some services. MyShell, however, offers a one-stop service with daily free AI credits, allowing users to avoid repeated registrations and payments.
Additionally, some AI products have regional restrictions, but on the MyShell platform, users can typically access various AI services smoothly, significantly enhancing the user experience. These advantages make MyShell an ideal choice for users, providing a convenient, efficient, and seamless AI service experience.
The MyShell ecosystem is built on three core components:
- Self-developed AI Models: MyShell has developed several open-source AI models, including AIGC and large language models, which users can use directly. More open-source models can be found on its official GitHub.
- Open AI Development Platform: Creators can easily build AI applications on the MyShell platform, drawing on different models and integrating external APIs. With native development workflows and modular toolkits, creators can quickly turn ideas into working AI applications, accelerating innovation.
- Fair Incentive Ecosystem: MyShell’s incentive mechanism encourages users to create content that matches their own preferences. Creators receive native platform rewards when their self-built applications are used and can also earn income from consumers.
In MyShell’s Workshop, users can build AI bots in three modes, catering to both professional developers and everyday users. Classic Mode lets users set model parameters and instructions, and the resulting bot can be integrated into social media apps. Developer Mode requires users to upload their own model files. ShellAgent Mode enables bots to be built in a no-code format.
Figure 9: Creating Your Own AI Agent with MyShell. Source: MyShell Website
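To give a feel for what a Classic Mode style bot involves, here is a minimal Python sketch: pick a model, set its parameters, and write the instructions that define the bot’s behavior. The field names and structure below are invented for illustration and do not reflect MyShell’s actual Workshop schema.

```python
# Hypothetical illustration only: these field names are invented and do not
# reflect MyShell's actual Workshop schema. The point is the general shape of a
# "Classic Mode" style bot definition.
classic_mode_bot = {
    "name": "paper-summary-helper",
    "model": "an-open-source-llm",      # whichever model the platform exposes
    "parameters": {
        "temperature": 0.7,             # creativity vs. determinism
        "max_tokens": 512,
    },
    "instructions": (
        "You summarize academic papers in plain language and list three "
        "follow-up questions for the reader."
    ),
    "integrations": ["telegram"],       # e.g. surfacing the bot in a chat app
}

def build_prompt(bot: dict, user_message: str) -> str:
    """Compose the bot's system instructions with the user's message."""
    return f"{bot['instructions']}\n\nUser: {user_message}\nAssistant:"

print(build_prompt(classic_mode_bot, "Summarize the attached CNN paper."))
```

Developer Mode and ShellAgent Mode sit on either side of this: the former hands the creator full control by letting them bring their own model files, while the latter hides even this configuration step behind a no-code interface.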
MyShell combines the concepts of decentralization and AI technology, aiming to provide an open, flexible, and fairly incentivized ecosystem for consumers, creators, and researchers. By offering self-developed AI models, an open development platform, and various incentive mechanisms, it provides users with a wealth of tools and resources to realize their ideas and needs.
MyShell integrates a variety of high-quality models, and the team is continuously developing new AI models to improve the user experience. In practice, MyShell still faces some challenges; for instance, some users have reported that Chinese-language support in certain models needs improvement. Nevertheless, a look at MyShell’s code repositories shows that the team updates and optimizes consistently and actively listens to community feedback. With ongoing improvements, the user experience is expected to keep getting better.
Team Information:
Co-Founder Zengyi Qin: Specializes in speech algorithm research and holds a Ph.D. from MIT. During his bachelor’s studies at Tsinghua University, he published multiple papers in top conferences. He also has professional experience in robotics, computer vision, and reinforcement learning.
Co-Founder Ethan Sun: Graduated from Oxford University with a degree in Computer Science and has years of experience in the AR+AI field.
Funding Information:
In October 2023, MyShell raised $5.6 million in a seed round led by INCE Capital, with participation from Hashkey Capital, Folius Ventures, SevenX Ventures, and OP Crypto.
In March 2024, MyShell raised $11 million in a Pre-A round led by Dragonfly, with participation from Delphi Digital, Bankless Ventures, Maven11 Capital, Nascent, Nomad, Foresight Ventures, Animoca Ventures, OKX Ventures, and GSR. The round also received support from angel investors such as Balaji Srinivasan, Illia Polosukhin, Casey K. Caruso, and Santiago Santos.
In August of this year, Binance Labs announced its investment in MyShell through its Season 6 incubation program, with the specific amount undisclosed.
四、Challenges and Considerations
Although this field is still in its early stages, practitioners should consider several key factors that can influence the success of their projects. Here are some important aspects to take into account:
Balancing Supply and Demand for AI Resources: The balance of AI resource supply and demand is crucial for Web3-AI ecosystem projects. Users who require models, data, or compute resources may already be accustomed to acquiring AI resources through Web2 platforms. Therefore, one of the key challenges for the industry is figuring out how to attract AI resource providers to contribute to the Web3-AI ecosystem and how to bring in more users to access these resources, thereby achieving a balanced and efficient matching of AI resources.
Data Challenges: Data quality directly affects model training results. Ensuring data quality during data collection and preprocessing, and filtering out large amounts of junk data generated by users, will be significant challenges for data-centric projects. Project teams can enhance data credibility by employing scientific data quality control methods and transparently showcasing the data processing outcomes, thereby attracting more data demand-side users.
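As a simple illustration of what "data quality control" can mean in a crowdsourced setting, the sketch below deduplicates submitted samples and keeps only labels that a clear majority of contributors agree on. The thresholds and pipeline are invented for this example; real projects would layer on far more checks, such as spam scoring, contributor reputation, and audits.

```python
# Illustrative only: a very simple quality filter for crowdsourced labels --
# deduplicate samples and keep only labels most contributors agree on.
import hashlib
from collections import Counter

def content_hash(text: str) -> str:
    """Fingerprint a sample so exact duplicates can be merged."""
    return hashlib.sha256(text.strip().lower().encode()).hexdigest()

def filter_submissions(submissions, min_votes=3, agreement=0.8):
    """submissions: list of (sample_text, label) pairs from many contributors."""
    votes = {}  # hash -> (sample_text, Counter of labels)
    for text, label in submissions:
        h = content_hash(text)
        if h not in votes:
            votes[h] = (text, Counter())
        votes[h][1][label] += 1

    accepted = []
    for text, counts in votes.values():
        total = sum(counts.values())
        label, top = counts.most_common(1)[0]
        # Keep the sample only if enough contributors voted and most agree.
        if total >= min_votes and top / total >= agreement:
            accepted.append((text, label))
    return accepted

# Example: four contributors label the sample "cat", one says "dog".
subs = [("a small tabby curled on a sofa", "cat")] * 4 + \
       [("a small tabby curled on a sofa", "dog")]
print(filter_submissions(subs))  # [('a small tabby curled on a sofa', 'cat')]
```

Showing the results of such filtering transparently, for example publishing acceptance rates and rejection reasons, is one way a project can build the data credibility described above.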
Security Issues: In the Web3 industry, it’s essential to achieve on-chain and off-chain interactions of AI assets through blockchain and privacy technologies, as well as to ensure the security of data, models, and other AI resources. Although some project teams have proposed solutions, this field is still under development. With continuous technological advancements, higher and verified security standards are expected to be achieved.
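One common general pattern for tying off-chain AI assets to on-chain records is a commit-and-verify scheme: publish a cryptographic hash of the asset (for example, model weights) on-chain, then let consumers check that what they download matches the commitment. The sketch below stands in a plain dictionary for the on-chain registry and is not any specific project's implementation; real systems add richer guarantees such as zkML proofs or trusted execution environments.

```python
# Illustrative only: a minimal commit-and-verify pattern for off-chain AI assets.
# The "chain" here is just a dict; a real system would store the digest in a
# smart contract and likely add stronger proofs (zkML, TEEs, etc.).
import hashlib

def digest(weights_bytes: bytes) -> str:
    return hashlib.sha256(weights_bytes).hexdigest()

onchain_registry = {}  # stand-in for contract storage: model_id -> digest

def commit_model(model_id: str, weights_bytes: bytes) -> None:
    """Provider publishes the fingerprint of the weights they claim to serve."""
    onchain_registry[model_id] = digest(weights_bytes)

def verify_model(model_id: str, downloaded_weights: bytes) -> bool:
    """Consumer checks the bytes they received match the committed fingerprint."""
    return onchain_registry.get(model_id) == digest(downloaded_weights)

weights = b"...serialized model weights..."
commit_model("cat-dog-classifier-v1", weights)
print(verify_model("cat-dog-classifier-v1", weights))           # True
print(verify_model("cat-dog-classifier-v1", b"tampered bytes"))  # False
```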
User Experience:
- Web2 users are typically accustomed to traditional operational experiences, whereas Web3 projects often involve complex smart contracts and decentralized wallets, which may present a high barrier for ordinary users. The industry should consider how to further optimize user experience and educational facilities to attract more Web2 users into the Web3-AI ecosystem.
- For Web3 users, establishing effective incentive mechanisms and a sustainable economic system is crucial for long-term user retention. At the same time, the industry should consider how to make the most of AI technology to improve efficiency across Web3 and how to innovate new application scenarios and mechanics that combine with AI; these are key factors in the ecosystem’s healthy development.
As the “Internet+” trend continues to evolve, we have witnessed countless innovations and transformations. Many fields have already integrated AI, and looking ahead, an “AI+” era may flourish and fundamentally change the way we live. The integration of Web3 and AI means that data ownership and control return to users, making AI more transparent and trustworthy. This trend is expected to build a fairer, more open market environment and promote efficiency and innovation across industries. We look forward to industry builders working together to create better AI solutions.
References
- https://ieeexplore.ieee.org/abstract/document/9451544
- https://docs.ora.io/doc/oao-onchain-ai-oracle/introduction
- https://saharalabs.ai/
- https://saharalabs.ai/blog/sahara-ai-raise-43m
- https://bittensor.com/
- https://docs.bittensor.com/yuma-consensus
- https://docs.bittensor.com/emissions#emission
- https://twitter.com/myshell_ai
- https://twitter.com/SubVortexTao
- https://foresightnews.pro/article/detail/49752
- https://www.ora.io/
- https://docs.ora.io/doc/imo/introduction
- https://github.com/ora-io/keras2circom
- https://arxiv.org/abs/2401.17555
- https://arxiv.org/abs/2402.15006
- https://x.com/OraProtocol/status/1805981228329513260
- https://x.com/getgrass_io
- https://www.getgrass.io/blog/grass-the-first-ever-layer-2-data-rollup
- https://wynd-network.gitbook.io/grass-docs/architecture/overview#edge-embedding-models
- https://io.net/
- https://www.ray.io/
- https://www.techflowpost.com/article/detail_17611.html
- https://myshell.ai/
- https://www.chaincatcher.com/article/2118663
Acknowledgments
There is still plenty of research and engineering to do in this nascent infrastructure paradigm, along with areas we did not cover in this post. If you find any related research topics intriguing, please reach out to Chloe.
Many thanks to Severus and Jiayi for their insightful comments and feedback on this article. And finally, thanks to Jiayi’s cat for its lovely appearance.