xKnown.ai ($XKNOWN)


Recent developments:

15% of the initial token allocation is held by the creator.

Creator token stats last updated: Jun 17, 2025 17:08

The following is generated by an LLM:

Summary

AI-powered voice data assetization protocol with Web3 ownership

Analysis

xKnown.ai presents a comprehensive project aiming to decentralize voice data ownership and valuation through AI and blockchain. The creator holds 15% of the token supply (150M tokens), which falls within the acceptable ownership range (10-50%) for incentivization without excessive centralization. Tokenomics include a 12-month vesting schedule for the team and ecosystem allocations (50% combined), reducing immediate sell-off risk. The project addresses a genuine problem in underutilized voice data markets with technical solutions such as AI valuation agents, privacy-preserving protocols, and NFT-based data assetization. The team demonstrates relevant expertise in AI, Web3, and token economics. However, success depends on enterprise adoption of the DSP layer and on hardware deployment. While the token serves clear utilities (staking, governance, data trading), the complexity of integrating AI agents with blockchain infrastructure poses execution risk. No red flags regarding token distribution or rug pulls were detected.

Rating: 7

Generated with LLM: deepseek/deepseek-r1

LLM responses last updated: Jun 17, 2025 17:08

Original investment data:

# xKnown.ai ($XKNOWN)

URL on launchpad: https://app.virtuals.io/geneses/5764
Launched at: Tue, 17 Jun 2025 15:49:36 GMT
Launched through the launchpad: Virtuals Protocol
Launch status: GENESIS

## Token details and tokenomics

Token symbol: $XKNOWN
Token supply: 1 billion
Creator initial number of tokens: 150,000,000 (15% of token supply)

## Creator info

Creator address: 0xbDf8cAB0540767463761B6ec38f19307AD1e7Cf2
Creator on basescan.org: https://basescan.org/address/0xbDf8cAB0540767463761B6ec38f19307AD1e7Cf2#asset-tokens
Creator on virtuals.io: https://app.virtuals.io/profile/0xbDf8cAB0540767463761B6ec38f19307AD1e7Cf2
Creator on zerion.io: https://app.zerion.io/0xbDf8cAB0540767463761B6ec38f19307AD1e7Cf2/overview
Creator on debank.com: https://debank.com/profile/0xbDf8cAB0540767463761B6ec38f19307AD1e7Cf2

## Description at launch

Voice data is scattered, unused, and undervalued. But it doesn't have to be. For the first time, you can upload, evaluate, and earn on the xKnown.ai intelligent agent layer.

## Overview

# **xKnown Whitepaper**

# 1. Project Overview and Core Value Proposition

## **1.1 Project Positioning**

xKnown is the world's first decentralized infrastructure that combines AI Agents, Web3 data ownership, and voice data assetization. Voice data has long been fragmented, underutilized, and undervalued — xKnown redefines its value. Through the integration of smart hardware, intelligent agents, and AI-powered data valuation engines, xKnown transforms every voice data fragment into a tradable digital asset fully owned by users. This empowers personal data sovereignty, drives the co-creation of AI data ecosystems, and ensures fair revenue distribution across all participants.

## **1.2 Core Value Proposition**

xKnown redefines data value discovery and distribution systems, offering five key advantages:

- **Evolution of Data Valuation Models:** Traditional platforms price data roughly by duration or upload volume, failing to reflect actual data value. xKnown employs a six-dimensional intelligent valuation system with a dynamic scarcity pricing mechanism, accurately assessing each dataset's unique value. This system can boost data contributor revenue by 10 to 100 times compared to traditional platforms.

- **Revolution in Data Ownership:** Unlike traditional platforms where uploaded data becomes platform-owned, xKnown ensures permanent ownership for contributors via Web3 on-chain confirmation. Users continuously benefit from long-term data usage, circulation, and licensing.

- **First-Principle Data Value:** Data is inherently valuable at the edge — naturally occurring, user-generated, and deeply personal. Voice data, in particular, has long been fragmented, underutilized, and undervalued. xKnown unlocks this first-principle asset class by turning voice into structured, owned, and tradable data assets.

- **Optimized Incentive Models:** Traditional platforms often offer one-time rewards for uploads, lacking long-term incentives. xKnown introduces staking-based weighted returns and dynamic compounding yield mechanisms. Long-term data contribution and ecosystem participation enable compounding asset growth.

- **Fully Unlocked Market Liquidity:** Traditional data remains trapped within platforms. xKnown transforms data into NFTs, enabling decentralized trading, cross-chain ownership, and cross-platform circulation, unlocking liquidity and pricing efficiency.

# **2. System Architecture Overview**
xKnown consists of four core modules, each fulfilling critical functions:

![Upload](https://virtualprotocolcdn.s3.ap-southeast-1.amazonaws.com/virtuals/33111/uploads/daffe720-3197-403d-b484-125c2fdb2130.png)

## **2.1 Hardware Collection Layer**

- Utilizes portable xKnown smart recording hardware with advanced microphone array noise reduction.
- Supports real-time, multi-language voice capture with high accuracy even in noisy environments.
- Includes offline caching to ensure secure local storage when disconnected from the internet.

## **2.2 AI Agent Layer**

- Hybrid architecture combining local and cloud-based Agents.
- Performs speech recognition, real-time transcription, deep semantic understanding, scarcity detection, and valuation.
- Continuously optimizes pricing models and value extraction using AI-driven multi-dimensional analysis.

## **2.3 DSP Layer (Data Service Platform)**

- Core data service hub offering task matching, quality validation, asset minting, and revenue distribution.
- Enterprises can issue customized data collection, labeling, and pre-processing tasks automatically matched to optimal contributors.
- Provides automated on-chain data ownership, NFT minting, and transparent income distribution.

## **2.4 Blockchain Ownership Layer**

- On-chain data asset system managing ownership, revenue distribution, and governance rights.
- Supports staking governance, revenue sharing, and incentive control mechanisms.
- Incorporates zero-knowledge proofs and privacy-preserving computations for secure and compliant data usage.

# 3. xKnown Agent Layer: Intelligent Data Value Extraction Architecture

## **3.1 Architecture Overview**

The Agent Layer uses an AI-driven decision architecture in which AI Agents autonomously discover, evaluate, and extract data value from voice inputs gathered via hardware devices. Moving beyond rule-based systems, it leverages ML models and LLMs for logical reasoning and valuation.

### **3.1.1 Design Principles**

- AI Agent First: Core decisions are AI-led, with rule engines offering an auxiliary fallback.
- Value Discovery Driven: Prioritizes uncovering commercial value within the data itself.
- Privacy by Design: Ensures privacy compliance throughout processing, mitigating leakage risks.

### **3.1.2 System Modules**

- Data Collection & Storage: Encrypted, anonymized data uploaded to a distributed data lake.
- Pre-Processing: Speech transcription, semantic parsing, and sentiment analysis generate structured data.
- AI Agent Valuation: Deep analysis outputs scores and labels.
- Task Orchestration: Coordinates data flow, task scheduling, and decision execution.

![Upload](https://virtualprotocolcdn.s3.ap-southeast-1.amazonaws.com/virtuals/33111/uploads/e47a1de2-f4f2-4415-99f2-a7a33af7199c.png)

## 3.2 Data Collection and Storage

### 3.2.1 Collection Process

In this phase, voice data is collected using customized hardware firmware. Devices apply local noise reduction algorithms to eliminate environmental noise. Embedded encryption algorithms (symmetric or hybrid) encrypt raw data for secure transmission. Before upload, sensitive information such as names, phone numbers, and addresses is anonymized through keyword dictionaries and entity recognition models.

For unstable network environments, segmented transmission is used. Audio recordings are split into multiple fragments, each uploaded independently with breakpoint resumption capability for interrupted uploads.

Uploaded data is stored in xKnown's distributed data lake, utilizing object storage technology with version control and redundant backups to ensure data security and availability. A minimal sketch of the segmented, resumable upload appears below.
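The following is a minimal, hypothetical sketch of that segmented transmission: an audio file is split into fixed-size fragments and uploaded with a persisted resume index, so an interrupted transfer restarts from the last confirmed fragment. The chunk size, state file, and `send` callback are assumptions; the whitepaper does not specify the actual transport.

```python
# Illustrative sketch of segmented upload with breakpoint resumption.
import os
import json
import hashlib

CHUNK_SIZE = 256 * 1024  # 256 KiB fragments (assumed)

def iter_fragments(path: str):
    """Split an audio file into fixed-size fragments."""
    with open(path, "rb") as f:
        index = 0
        while chunk := f.read(CHUNK_SIZE):
            yield index, chunk
            index += 1

def upload_with_resume(path: str, send, state_file: str = ".upload_state.json"):
    """Upload fragments, persisting progress so an interrupted
    transfer resumes from the last confirmed fragment."""
    done = set()
    if os.path.exists(state_file):
        with open(state_file) as f:
            done = set(json.load(f))
    for index, chunk in iter_fragments(path):
        if index in done:
            continue  # already confirmed in a previous session
        send(index, chunk, hashlib.sha256(chunk).hexdigest())
        done.add(index)
        with open(state_file, "w") as f:
            json.dump(sorted(done), f)  # checkpoint after each fragment
```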
### 3.2.2 Data Privacy and Security

- End-to-End Encryption: Full-chain encryption from device to cloud using TLS and application-level encryption.
- Local Anonymization: Sensitive entity recognition, rule filtering, and content labeling ensure personal data is stripped prior to upload.
- Access Control: Role-based access control (RBAC) and least-privilege principles restrict module-level access rights.
- Audit Logs: All data access is logged for security auditing and traceability.

### 3.2.3 Data Processing Workflow

- Pre-processing Trigger: The orchestration system fetches new data batches for speech recognition and transcription.
- Semantic Structuring: NLP models extract keywords, identify topics, disambiguate semantics, and generate sentiment labels, outputting standardized structured formats.
- Value Evaluation Execution: Structured data is scored by AI Agent models using multi-factor valuation logic.
- Incentive Coefficient Calculation: Contributor incentive coefficients are calculated based on data value, scarcity, and other multi-dimensional factors.
- Storage Confirmation: Each uploaded data fragment is assigned a unique identifier with version control and archived.
- Ownership Synchronization On-chain: After all processing is complete, data hashes and user DIDs are written to the blockchain to finalize ownership confirmation.

![Upload](https://virtualprotocolcdn.s3.ap-southeast-1.amazonaws.com/virtuals/33111/uploads/e906c5fb-a6c7-4be5-ac20-2074db72bb61.png)

## 3.3 Data Pre-processing Module

### 3.3.1 Speech Recognition and Transcription

Audio data enters ASR models trained on industry-specific corpora. Acoustic and language models jointly infer transcripts, handling accents, speaking speeds, and technical terms robustly. Custom hotword dictionaries can be added for specialized domains.

### 3.3.2 Semantic Parsing and Sentiment Analysis

- Lexical Analysis: Converts sentences into part-of-speech tags and dependency trees.
- Keyword Extraction: Identifies core topic words and entities.
- Topic Classification: Uses LLMs for contextual understanding and topic recognition.
- Disambiguation: Sliding-window context adjusts word-sense interpretations.
- Sentiment Analysis: Multi-class classifiers label sentiment (positive, neutral, negative) with confidence scores.

### 3.3.3 Standardization and Structuring

- Character Encoding: Unified formats such as UTF-8.
- Timeline Alignment: Uses timestamps and speaker separation models.
- Noise Filtering: Removes duplicates, silence, and irrelevant segments.
- Output Format: {Speaker ID, Timestamp, Text Content, Topic Tags, Sentiment Tags}; one possible encoding is sketched below.
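As an illustration only, here is one way the 3.3.3 output record could be typed and serialized. The field names follow the whitepaper's output format; the types and JSON encoding are assumptions.

```python
# Hypothetical typed encoding of the 3.3.3 structured output record.
from dataclasses import dataclass, asdict
from typing import List
import json

@dataclass
class VoiceRecord:
    speaker_id: str            # anonymized speaker identifier
    timestamp: float           # seconds from start of recording
    text: str                  # transcribed segment text
    topic_tags: List[str]      # topics assigned by the LLM classifier
    sentiment_tags: List[str]  # e.g. ["positive:0.92"] (label:confidence)

record = VoiceRecord("spk_001", 12.4, "Quarterly revenue grew.",
                     ["finance"], ["positive:0.92"])
print(json.dumps(asdict(record)))  # standardized structured output
```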
## 3.4 AI Agent Valuation Module

Within the xKnown ecosystem, the Agent Layer serves as the intelligent core of the entire data valuation process, performing end-to-end, automated evaluation of voice data and transforming raw audio into fully quantified digital assets. The system's core valuation workflow consists of five key stages:

![Upload](https://virtualprotocolcdn.s3.ap-southeast-1.amazonaws.com/virtuals/33111/uploads/f5d28dc2-843f-420c-9109-1545be1098d3.png)

### 3.4.1 Intelligent Perception and Semantic Understanding

The valuation process begins with multimodal perception. The system first conducts comprehensive quality checks on the incoming audio data, evaluating technical parameters such as signal-to-noise ratio (SNR), sampling rate, and data completeness to ensure usability. Real-time transcription is then performed across more than 30 languages using state-of-the-art speech recognition models (e.g., Whisper, Wav2Vec2). Simultaneously, emotional features such as tone and sentiment are extracted through transformer-based emotion classification models. Speaker identity features are captured via speaker embedding algorithms (e.g., x-vector, ECAPA-TDNN), enabling individual-level deduplication and traceability.

Once transcription is complete, the semantic understanding module is activated. It generates deep semantic embeddings using models such as BERT and RoBERTa, quantifying linguistic logic, information density, and domain complexity. Named entity recognition automatically extracts key entities such as people, organizations, locations, and events, while knowledge relationship extraction (e.g., OpenIE) identifies logical dependencies and causal chains, further enriching the data structure. A fusion algorithm integrates the semantic and emotional dimensions to construct a comprehensive semantic profile for each data record.

### 3.4.2 Scenario Identification and Commercial Value Estimation

Following semantic analysis, the scenario recognition module evaluates the data's real-world application context and commercial potential. Acoustic scene classification models identify recording environments (e.g., meeting rooms, construction sites, outdoor spaces). Dialogue pattern detection algorithms classify the conversation type (monologue, multi-party discussion, or interview). Industry domain classification is performed automatically based on language patterns and domain-specific keywords (e.g., healthcare, legal, education, finance). Leveraging historical market data, the system applies gradient boosting models (e.g., XGBoost) to estimate the data's potential market value for AI training marketplaces. Contextual metadata such as timestamps, device state, and geographic location are incorporated to construct a holistic usage profile for each dataset. A sketch of such a value estimator follows.
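A minimal sketch of the gradient-boosting value estimation described in 3.4.2. The whitepaper names XGBoost; scikit-learn's `GradientBoostingRegressor` stands in here, and the feature set and training rows are invented purely for illustration.

```python
# Hedged sketch: regress historical sale prices on data features.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical features: [duration_s, snr_db, scarcity_score, domain_id]
X_train = np.array([
    [60,  25, 0.2, 0],   # common casual speech
    [300, 30, 0.7, 1],   # domain-specific interview
    [120, 18, 0.9, 2],   # rare-language sample
])
y_train = np.array([1.0, 8.5, 14.0])  # assumed historical prices (tokens)

model = GradientBoostingRegressor(n_estimators=100).fit(X_train, y_train)
estimated_value = model.predict([[180, 28, 0.8, 1]])[0]
print(f"estimated market value: {estimated_value:.2f} $XKNOWN")
```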
### 3.4.3 Six-Dimensional Valuation Model and Hybrid Decision Engine

All semantic and contextual insights are fed into xKnown's proprietary Hybrid Valuation Engine, which combines AI-driven inference with rule-based scoring logic:

- AI Agent Layer: Advanced language models (e.g., GPT-4, Claude) apply complex reasoning based on structured prompt engineering, generating comprehensive valuation outputs and confidence scores.
- Rule Engine Layer: Traditional rule-based algorithms apply long-standing scoring weights and consistency rules to normalize technical metrics such as length, sampling rate, SNR, and semantic density.
- Fusion Decision Layer: Bayesian inference algorithms integrate both AI and rule-based evaluations, balancing decision flexibility with system stability.
- Anomaly Detection & Traceability: Built-in safeguards ensure transparency and auditability under outliers, noisy data, or edge cases, while maintaining interpretability.

The six core valuation dimensions are:

- Technical Quality: Foundational data usability metrics.
- Content Value: Semantic depth and informational complexity.
- Scarcity: Linguistic uniqueness and corpus rarity.
- Market Demand: Dynamic supply-demand indicators and enterprise training needs.
- Application Potential: Suitability for AI model training and transfer learning.
- Compliance & Safety: Legal, privacy, and regulatory compliance screening.

![Upload](https://virtualprotocolcdn.s3.ap-southeast-1.amazonaws.com/virtuals/33111/uploads/142207bc-17fc-4d5a-b710-e06d5b923976.png)

These six dimensions collectively determine the core multiplier logic of data valuation, which, combined with incentive coefficients, directly informs final revenue allocation weights.

### 3.4.4 Scarcity, Market Forecasting, and Risk Control Agents

Beyond the core valuation system, xKnown deploys three auxiliary Agent modules to enhance value precision and control systemic risks:

- Scarcity Discovery Agent: Quantifies linguistic rarity, content uniqueness, and temporal novelty using global language frequency databases, locality-sensitive hashing (LSH) for de-duplication, and semantic embedding comparisons. The resulting scarcity score is directly applied to high-multiplier incentives.
- Market Prediction Agent: Forecasts future demand for data assets using time-series models (e.g., LSTM, Prophet), supply-demand simulations, price elasticity analysis, and Monte Carlo scenario modeling. It integrates social sentiment signals from media and networks to capture short-term market expectations.
- Risk Assessment Agent: Performs real-time legal and reputational risk evaluation through PII detection, automated GDPR/CCPA compliance checks, brand reputation keyword filtering, and unsafe content moderation APIs. Risk scores are applied as revenue deduction buffers or dataset exclusion filters.

### 3.4.5 Value Multiplier Algorithm Description

Following the six-dimensional base scoring, xKnown applies a Value Multiplier Algorithm to dynamically adjust the final reward weight for each data asset. At its core, the algorithm leverages an S-curve gradient function to smoothly map score intervals, preventing extreme incentive distortion at both the low and high ends of the scoring spectrum.

The algorithm first calculates a base multiplier for each evaluation dimension, derived from its normalized score (0-100). The multiplier then incorporates user-specific participation parameters drawn from the incentive layer — such as staking volume, contributor tier, and community engagement — stacking staking incentives, tier bonuses, and community contribution boosts to form the final enhancement coefficient. Throughout the computation, dimension-specific caps ensure that multiplier outputs remain within predefined system-safe boundaries to preserve platform stability. A minimal sketch of this multiplier shape follows.
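A minimal sketch of the S-curve multiplier logic in 3.4.5, under assumed parameters: each normalized dimension score is mapped through a logistic curve, participation bonuses are stacked, and the result is clamped to a cap. None of the coefficients below are published by the project.

```python
# Hedged sketch of an S-curve value multiplier with stacked bonuses.
import math

def base_multiplier(score: float, midpoint: float = 50.0,
                    steepness: float = 0.1, max_mult: float = 3.0) -> float:
    """Map a normalized 0-100 dimension score onto a smooth S-curve,
    flattening incentives at both extremes."""
    return max_mult / (1.0 + math.exp(-steepness * (score - midpoint)))

def final_multiplier(dim_scores: dict, staking_bonus: float,
                     tier_bonus: float, community_bonus: float,
                     cap: float = 5.0) -> float:
    """Average per-dimension S-curve multipliers, stack participation
    bonuses, and clamp to a system-safe cap."""
    base = sum(base_multiplier(s) for s in dim_scores.values()) / len(dim_scores)
    boosted = base * (1.0 + staking_bonus + tier_bonus + community_bonus)
    return min(boosted, cap)

scores = {"technical": 80, "content": 70, "scarcity": 90,
          "demand": 60, "application": 75, "compliance": 95}
print(final_multiplier(scores, staking_bonus=0.2,
                       tier_bonus=0.1, community_bonus=0.05))
```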
### 3.4.6 Evaluation Output & Pricing Delivery

Upon completing the full assessment workflow, the system generates a comprehensive Valuation Report for each individual voice data asset. This report consolidates all six dimensional scores, scarcity metrics, market demand forecasts, risk assessments, and the fused decision outputs from both the AI-driven and rule-based engines.

For example, if a data sample achieves high technical quality, rich semantic content, strong scarcity attributes, favorable market demand, and broad application potential while maintaining minimal compliance risk, the system assigns it a higher value multiplier through this comprehensive assessment.

The resulting valuation serves as the definitive pricing reference for multiple downstream mechanisms: NFT minting, revenue sharing, staking weight allocation, and secondary market liquidity. Each valuation report remains fully traceable and explainable, supporting transparent governance, transaction fairness, and ecosystem credibility.

## 3.5 Agent Workflow Orchestration

The Agent Workflow Orchestration Layer serves as the central control logic of the xKnown intelligent valuation system. Its mission is to coordinate complex multi-agent tasks into a highly efficient, scalable, and fault-tolerant end-to-end processing pipeline, ensuring stable real-time valuation even under high concurrency and heterogeneous task loads.

![Upload](https://virtualprotocolcdn.s3.ap-southeast-1.amazonaws.com/virtuals/33111/uploads/ca2b1a45-33ce-428b-8bb7-22686c603177.png)

### 3.5.1 Workflow-Based Agent Architecture

At the core of the orchestration design lies a workflow-driven execution framework that decomposes the full data valuation pipeline into modular, independently executable stages. Each AI Agent is registered as a distinct node within this workflow, enabling flexible composition, parallelization, and dynamic optimization.

Key architectural principles include:

- Task modularization: Complex valuation is broken down into discrete, reusable Agent functions (e.g., transcription, semantic embedding, scene recognition, dimensional scoring).
- Directed Acyclic Graph (DAG) scheduling: Inter-task dependencies are mapped as DAG structures, ensuring strict data integrity across sequential and parallelized stages.
- Agent isolation: Each Agent operates independently, allowing flexible orchestration of AI models with varying compute demands, model sizes, and failure isolation domains.

### 3.5.2 Workflow Engine Implementation

The orchestration engine parses and manages full pipeline execution using a DAG-based system, dynamically analyzing dependencies and optimizing compute allocation. Core execution logic includes:

- DAG parsing and dependency resolution to derive optimal execution plans.
- Compute resource allocation based on available system GPUs, CPUs, and memory pools.
- Concurrent task scheduling for parallelizable Agents.
- Result aggregation, merging intermediate Agent outputs into unified valuation records.
- Exception handling through task retries, fallbacks, and graceful degradation under fault scenarios.

This workflow engine ensures deterministic, explainable execution traces across billions of data records, providing both high throughput and auditability. A toy sketch of the scheduling pattern follows.
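A toy sketch of the DAG scheduling pattern from 3.5.1-3.5.2, using Python's standard `graphlib` for topological ordering and a thread pool for stage-level parallelism. The agent names and the dependency graph are invented for illustration.

```python
# Hedged sketch: topological DAG execution with per-stage parallelism.
from graphlib import TopologicalSorter  # Python 3.9+
from concurrent.futures import ThreadPoolExecutor

# node -> set of prerequisite nodes (hypothetical valuation pipeline)
dag = {
    "transcribe":       set(),
    "embed_semantics":  {"transcribe"},
    "scene_classify":   {"transcribe"},
    "score_dimensions": {"embed_semantics", "scene_classify"},
    "aggregate_report": {"score_dimensions"},
}

def run_agent(name: str) -> None:
    print(f"running agent: {name}")  # placeholder for real model inference

ts = TopologicalSorter(dag)
ts.prepare()
with ThreadPoolExecutor() as pool:
    while ts.is_active():
        ready = list(ts.get_ready())      # agents whose dependencies are done
        list(pool.map(run_agent, ready))  # run the whole stage in parallel
        for node in ready:
            ts.done(node)                 # unlock downstream agents
```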
### 3.5.3 Intelligent Scheduling & Optimization

To maintain optimal processing performance under fluctuating workloads, the system integrates a real-time Intelligent Workflow Scheduler that continuously monitors execution states and dynamically reallocates resources. Key scheduling algorithms include:

- Critical Path Analysis (CPM): Identifies latency-sensitive execution chains to minimize bottlenecks.
- Resource-Constrained Optimization: Balances Agent assignments across compute clusters given GPU/CPU availability.
- Predictive Scheduling: Learns from historical execution times to anticipate future task durations.
- Dynamic Rebalancing: Actively shifts task priorities in real time based on live system load and valuation importance.

By continuously analyzing compute saturation, task wait times, and Agent health, the scheduler ensures stable valuation output even during surges of high-value data submissions.

### 3.5.4 Real-Time Monitoring & Failure Recovery

The Workflow Monitoring Manager operates alongside orchestration to ensure system observability, anomaly detection, and automated failure recovery. Monitoring mechanisms include:

- Real-time metrics: Collects Agent-level execution latency, compute usage, and task success rates.
- State persistence: Logs full workflow execution checkpoints into distributed Redis clusters.
- Anomaly detection: Applies statistical models to flag abnormal execution patterns or model drift.
- Resilient recovery: Supports checkpoint-based job restarts and fallback model substitutions upon fault triggers.
- Performance tuning: Continuously adjusts Agent resource allocation to eliminate bottlenecks.

The orchestration layer guarantees that even under extreme workloads or partial system faults, valuation pipelines maintain both continuity and full auditability.

## 3.6 Web3-based Data Ownership and Provenance

### 3.6.1 Ownership Logic

- Each dataset generates a unique data hash fingerprint upon ingestion.
- Decentralized identifiers (DIDs) bind data to user identity.
- Smart contracts anchor ownership, timestamps, hashes, and rights on-chain.
- Ownership records are stored on public or consortium blockchains for transparency and auditability.

A minimal sketch of the resulting ownership record appears below.
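A minimal sketch of the ownership-confirmation logic in 3.6.1: hash the data fragment, bind it to the contributor's DID, and assemble the record a smart contract would anchor on-chain. The record layout and DID scheme are assumptions.

```python
# Hedged sketch of an off-chain ownership record prepared for anchoring.
import hashlib
import json
import time

def make_ownership_record(data: bytes, did: str) -> dict:
    """Fingerprint a data fragment and bind it to a contributor DID."""
    return {
        "data_hash": hashlib.sha256(data).hexdigest(),  # unique fingerprint
        "owner_did": did,                               # hypothetical DID format
        "timestamp": int(time.time()),                  # confirmation time
        "rights": ["license", "trade", "revenue_share"],
    }

record = make_ownership_record(
    b"<encrypted voice fragment>",
    "did:xknown:0xbDf8cAB0540767463761B6ec38f19307AD1e7Cf2",
)
# In production this record (or its hash) would be written on-chain by a
# smart contract; here we just print it.
print(json.dumps(record, indent=2))
```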
### 3.6.2 Ownership Benefits

- Trusted Provenance: Prevents forgery and unauthorized tampering.
- Assetization: Enables licensing, trading, and revenue sharing.
- Cross-Platform Trust: Enhances usability in multi-party collaborations.
- Compliance & Auditing: Satisfies regulatory traceability requirements.

## 3.7 System Security and Scalability

- End-to-end encryption across device, transmission, storage, and processing layers.
- Regulatory compliance with GDPR, CCPA, and domestic data laws.
- Full audit trails for reasoning logic, score revisions, and model versioning.

# 4. DSP Layer - Data Service Platform

## 4.1 Overview

The Data Service Platform (DSP) serves as the primary commercial interface of the xKnown ecosystem, enabling trusted, privacy-preserving, and verifiable data exchange between contributors, AI developers, enterprises, and model builders. Acting as both a data marketplace and an AI asset brokerage layer, the DSP bridges raw data supply with structured AI demand through a multi-sided platform architecture.

The DSP layer supports three major roles:

- C-side (Data Contributors): Individuals upload their personal data or perform micro-data tasks such as annotation, labeling, or transcription. Uploaded data undergoes initial quality screening by AI Agents to enforce minimum acceptance thresholds.
- B-side (Enterprises & AI Developers): Organizations can submit customized data acquisition requests (Call-for-Data Services), issue real-time crowdsourcing tasks, and subscribe to verified datasets pre-qualified by xKnown's agent-based valuation system.
- AI Marketplace: An open marketplace for trading AI assets, including datasets, pre-trained models, AI Agents, and workflow pipelines. Developers and AI builders may purchase clean, high-value datasets or offer proprietary models, with flexible licensing and monetization structures. The DSP may also act as a syndication partner to third-party AI marketplaces such as Sahara AI.

![Upload](https://virtualprotocolcdn.s3.ap-southeast-1.amazonaws.com/virtuals/33111/uploads/ce4def68-ce1b-479a-8ab7-9dff69f65542.png)

## 4.2 Privacy-Preserving Data Valuation Protocol

Given the sensitivity of personal data, the DSP integrates multiple privacy-preserving valuation techniques to ensure that:

- Agent-based valuation can occur without raw data leakage.
- Data contributors retain sovereignty over their private content.
- Valuation results remain transparent, auditable, and usable for transaction settlement.

The core privacy-enhanced valuation stack combines:

- Differential Privacy (DP): Injects calibrated noise during valuation aggregation, ensuring individual record privacy is maintained.
- Homomorphic Encryption (HE): Enables AI Agents to compute partial valuation scores directly over encrypted feature vectors, without decrypting sensitive inputs.
- Secure Multi-Party Computation (SMPC): Supports joint valuation between multiple stakeholders without full data exposure.
- Shapley-Value-Inspired DP Extensions: Leverages Shapley value calculations, incorporating differential privacy to estimate marginal data contributions under privacy constraints, allowing buyers to assess expected value prior to direct access.

In practice, B-side buyers may access summary valuation scores derived from encrypted pipelines, but cannot retrieve raw data unless authorized through contractual settlement. A sketch of the differential-privacy aggregation step follows.
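A minimal sketch of the differential-privacy component of 4.2: the Laplace mechanism releases a noised aggregate valuation, with noise calibrated to the query's sensitivity. The epsilon and score range are assumed values, not project parameters.

```python
# Hedged sketch: Laplace mechanism for a differentially private mean.
import numpy as np

def dp_average(scores, epsilon=1.0, score_range=100.0):
    """Release a differentially private mean of per-record valuation
    scores; the mean of n values in [0, score_range] has sensitivity
    score_range / n, so the Laplace scale is sensitivity / epsilon."""
    n = len(scores)
    scale = (score_range / n) / epsilon
    return float(np.mean(scores) + np.random.laplace(0.0, scale))

# Buyers see only the noised aggregate, never individual record scores.
print(dp_average([72.0, 88.5, 64.0, 91.2], epsilon=0.5))
```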
## 4.3 Fully Verifiable Batch Data Trading Protocol

To support scalable, enterprise-grade data transactions, the DSP implements a zero-knowledge-based settlement mechanism that allows bulk dataset exchanges between multiple sellers and institutional buyers. The full protocol proceeds as follows:

![Upload](https://virtualprotocolcdn.s3.ap-southeast-1.amazonaws.com/virtuals/33111/uploads/268e7386-e2e1-484c-b6d7-e8db234b6023.jpg)

*More details about this part can be found on our website.*

This mechanism enables:

- Trustless multi-party batch transactions.
- Decentralized settlement and escrow through blockchain.
- Full data privacy preservation until transaction finalization.
- Flexible pricing structures linked directly to individual valuation scores.

## 4.4 DSP Platform Positioning within xKnown

The DSP layer functions not only as a marketplace but as a complete trust-minimized data brokerage protocol stack:

- Fully integrated with xKnown's upstream AI Agent valuation pipeline;
- Powered by encrypted valuation models that balance privacy and transparency;
- Enabling cross-border, multi-party institutional data commerce at scale;
- Providing composability for enterprise-specific marketplaces, licensing syndication, and cross-chain data liquidity.

# 5. Token Model

## 5.1 Ecosystem Roles

xKnown's decentralized collaboration ecosystem revolves around $XKNOWN as its core engine, connecting data contributors, enterprise users, model developers, and governance participants through an incentive-aligned token system. Four primary ecosystem roles are defined:

**(1) Data Contributors**

- Individuals collect and upload voice data.
- Earn $XKNOWN rewards based on data quality, usage frequency, and labeling accuracy.
- Can authorize enterprises to license data for additional revenue sharing.

**(2) Data Consumers (Enterprises / Developers)**

- Enterprises (e.g., AI firms, corporate clients) submit data requests via the DSP.
- Pay data access and task customization fees using $XKNOWN.
- Submit requests for semantic labeling, rare language collection, or multilingual training sets.

**(3) DSP Node Operators**

- Operate service nodes for task scheduling, data validation, and governance voting.
- Stake $XKNOWN tokens as operational collateral.
- Receive revenue shares based on contributions and governance participation.

**(4) Governance Participants**

All token holders may participate in governance:

- Propose or vote on data quality standards.
- Decide buyback & burn parameters.
- Vote on DSP upgrades and incentive model adjustments.

![Upload](https://virtualprotocolcdn.s3.ap-southeast-1.amazonaws.com/virtuals/33111/uploads/5119ac4d-0a37-4f03-b92d-c609a2138fea.png)

## 5.2 $XKNOWN Core Utilities

- Upload Incentives: Rewards for uploading and labeling tasks, distributed dynamically based on data quality.
- Feature Unlocks: Unlock advanced Agent features (export formats, summarization templates, private model deployment) via $XKNOWN.
- Data Trading Medium: Enterprises use $XKNOWN to purchase data access or custom tasks.
- Platform Governance: Stake $XKNOWN to influence task priority, scoring rights, and revenue share design.
- Node Staking: DSP nodes must stake $XKNOWN to guarantee service quality.

## 5.3 Deflationary and Long-Term Incentive Mechanisms

**(1) Buyback & Burn**

- 50% of DSP revenue is used for quarterly buybacks.
- Purchased tokens are sent to a burn pool.
- On-chain transparency ensures sustained supply reduction.

**(2) Multi-Platform Dual Incentive Model**

The DSP layer integrates with multiple leading AI data marketplaces. Cross-platform data transactions allow $XKNOWN participants to accumulate additional ecosystem incentive points alongside native platform rewards. This architecture amplifies reward multipliers, enhances user participation yields, and fosters long-term ecosystem synergies across multiple data networks.

**(3) Data Reputation and Compound Growth**

- Contributors earn Data Reputation points based on data stability, usage, and supply activeness.
- Reputation boosts data circulation weight.
- Incentivizes long-term, high-quality data contribution with compounding effects.

## 5.4 Growth Flywheel

- **AI Data Flywheel:** Contributors upload → richer datasets → Agent optimization → rising data value → enterprises purchase → recurring $XKNOWN usage → broader user participation → better model performance → more enterprises attracted.
- **Governance Flywheel:** Users stake → governance optimization → node activity grows → task matching & data flow improve → transaction volume rises → $XKNOWN use cases expand.

![Upload](https://virtualprotocolcdn.s3.ap-southeast-1.amazonaws.com/virtuals/33111/uploads/89d8408e-499b-4af8-bd82-7202029cda30.png)

## 5.5 Token Distribution & Vesting Plan

The total token supply is distributed as follows: 37.5% allocated to Public Sale, 12.5% to the Liquidity Pool, 15% to the Team, and 35% to Ecosystem Development. Team and Ecosystem allocations are subject to a 12-month linear vesting schedule starting from October 20, 2025, following a one-month cliff period designed to build community trust. Public Sale and Liquidity Pool allocations are fully unlocked at launch, while vesting for the remaining portions proceeds on a monthly basis. All airdrop programs, early-stage private sales (including strategic VC rounds), incentive campaigns, and partnership grants are drawn entirely from the Ecosystem Development allocation. This structure ensures maximum long-term flexibility for platform growth, user acquisition, and strategic ecosystem expansion, while maintaining full transparency over total supply circulation. The vesting arithmetic is sketched below.
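A sketch of the Section 5.5 vesting arithmetic under stated assumptions: Public Sale and Liquidity Pool unlock at launch, while Team and Ecosystem vest linearly over 12 monthly unlocks from the stated start date. How the one-month cliff interacts with that start date is not fully specified, so it is ignored here.

```python
# Hedged sketch of the 5.5 allocation and linear vesting schedule.
TOTAL_SUPPLY = 1_000_000_000
ALLOCATIONS = {            # share of total supply
    "public_sale": 0.375,  # unlocked at launch
    "liquidity":   0.125,  # unlocked at launch
    "team":        0.15,   # 12-month linear vesting
    "ecosystem":   0.35,   # 12-month linear vesting
}

def vested_tokens(bucket: str, months_since_start: int) -> float:
    """Tokens unlocked for a bucket after N whole months of vesting."""
    total = ALLOCATIONS[bucket] * TOTAL_SUPPLY
    if bucket in ("public_sale", "liquidity"):
        return total                    # fully unlocked at launch
    months = max(0, min(months_since_start, 12))
    return total * months / 12          # linear monthly unlock

# After 6 months, half of the 150M team allocation has vested:
print(vested_tokens("team", 6))  # 75,000,000.0
```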
# 6. Roadmap

**Q2 2025 — Technical Foundation (Ongoing)**

- Launch of official website with wallet and email login integration.
- Completion of xKnown smart recording hardware prototype.
- Development of initial Data Mining & Valuation Agents: supporting data upload, labeling, and incentive distribution; Alpha testing initiated.
- Finalization of the $XKNOWN token economic model.
- Token Generation Event (TGE) executed and exchange listing completed.

**Q3 2025 — Platform Launch Phase**

- Formation of Genesis Community for early tester onboarding.
- Beta testing of the Agent-powered data valuation system.
- Shapley Value module functional testing.
- Multi-agent valuation model integration testing.
- Product website UI v1.0 design and launch.
- Initiation of external partnerships and DSP third-party integration development.
- Deployment of xKnown nodes to third-party data platforms, launching joint ecosystem incentive programs.
- Release of initial data labeling tasks to drive large-scale contributor participation.
- Mass production of hardware v1.0 and activation of e-commerce distribution channels.

**Q4 2025 — Platform Expansion Phase**

- Launch of the DSP v1.0 platform.
- Acceleration of ecosystem integration partnerships.
- Activation of $XKNOWN ecosystem incentive mechanisms.

**2026 — Assetization & Ecosystem Growth**

**Q1 2026 — Data Assetization Framework**

- Implement commercial revenue-sharing agreements for long-term data contributors.
- Release enterprise DSP dashboard for custom task configuration and model training services.

## Additional information extracted from relevant pages
Creator profile on Virtuals Protocol (https://api.virtuals.io/api/profile/0xbDf8cAB0540767463761B6ec38f19307AD1e7Cf2), bio: "core of xKnown.ai. Specialized in Human-Computer Interaction and Product Design, graduated from Simon Fraser University. CHI author and former marketing director at a listed 4A agency. Experienced in HCI, hardware product design, and market strategy. Joined Web3 in 2021, now leading product and marketing at xKnown.ai." Verified creator Twitter: https://x.com/Anna_xeq3.

Launchpad listing metadata: chain BASE, category IP MIRROR, role PRODUCTIVITY, status GENESIS; verified project Twitter: https://x.com/xKnownai.
For the first time, you can upload, evaluate, and earn on xKnown.ai intelligent agent layer.\n", "sentientWalletAddress": null, "category": "IP MIRROR", "role": "PRODUCTIVITY", "daoAddress": null, "tokenAddress": null, "virtualId": null, "status": "GENESIS", "symbol": "XKNOWN", "lpAddress": null, "veTokenAddress": null, "totalValueLocked": null, "virtualTokenValue": null, "holderCount": null, "mcapInVirtual": null, "preToken": null, "preTokenPair": null, "aidesc": null, "firstMessage": null, "socials": { "VERIFIED_LINKS": { "TWITTER": "https://x.com/xKnownai" } }, "tbaAddress": null, "chain": "BASE", "mainVirtualId": null, "top10HolderPercentage": null, "level": 1, "valueFx": 0, "priceChangePercent24h": 0, "volume24h": 0, "mindshare": null, "migrateTokenAddress": null, "lpCreatedAt": null, "stakingAddress": null, "agentStakingContract": null, "merkleDistributor": null, "isVerified": false, "airdropMerkleDistributor": null, "overview": "# **xKnown Whitepaper**\n\n# 1. Project Overview and Core Value Proposition\n\n## **1.1 Project Positioning**\n\nxKnown is the world’s first decentralized infrastructure that combines AI Agents, Web3 data ownership, and voice data assetization. Voice data has long been fragmented, underutilized, and undervalued — xKnown redefines its value. Through the integration of smart hardware, intelligent agents, and AI-powered data valuation engines, xKnown transforms every voice data fragment into a tradable digital asset fully owned by users. This empowers personal data sovereignty, drives the co-creation of AI data ecosystems, and ensures fair revenue distribution across all participants.\n\n\n\n## **1.2 Core Value Proposition**\n\nxKnown redefines data value discovery and distribution systems, offering five key advantages:\n\n\\- **Evolution of Data Valuation Models:** Traditional platforms price data roughly based on duration or upload volume, failing to reflect actual data value. xKnown employs a six-dimensional intelligent valuation system with a dynamic scarcity pricing mechanism, accurately assessing each data set's unique value. This system can boost data contributor revenue by 10 to 100 times compared to traditional platforms.\n\n\\- **Revolution in Data Ownership**: Unlike traditional platforms where uploaded data becomes platform-owned, xKnown ensures permanent ownership for contributors via Web3 on-chain confirmation. Users continuously benefit from long-term data usage, circulation, and licensing.\n\n\\- **First-Principle Data Value**: Data is inherently valuable at the edge — naturally occurring, user-generated, and deeply personal. Voice data, in particular, has long been fragmented, underutilized, and undervalued. xKnown unlocks this first-principle asset class by turning voice into structured, owned, and tradable data assets.\n\n\\- **Optimized Incentive Models**: Traditional platforms often offer one-time rewards for uploads, lacking long-term incentives. xKnown introduces staking-based weighted returns and dynamic compounding yield mechanisms. Long-term data contribution and ecosystem participation enable compounding asset growth.\n\n\\- Fully Unlocked Market Liquidity: Traditional data remains trapped within platforms. xKnown transforms data into NFTs, enabling decentralized trading, cross-chain ownership, and cross-platform circulation, unlocking liquidity and pricing efficiency.\n\n \n\n# **2. 
System Architecture Overview**\n\nxKnown consists of four core modules, each fulfilling critical functions:\n\n![Upload](https://virtualprotocolcdn.s3.ap-southeast-1.amazonaws.com/virtuals/33111/uploads/daffe720-3197-403d-b484-125c2fdb2130.png)\n\n\n\n## **2.1 Hardware Collection Layer**\n\n\\- Utilizes portable xKnown smart recording hardware with advanced microphone array noise reduction.\n\n\\- Supports real-time, multi-language voice capture with high accuracy even in noisy environments.\n\n\\- Includes offline caching to ensure secure local storage when disconnected from the internet.\n\n\n\n## **2.2 AI Agent Layer**\n\n\\- Hybrid architecture combining local and cloud-based Agents.\n\n\\- Performs speech recognition, real-time transcription, deep semantic understanding, scarcity detection, and valuation.\n\n\\- Continuously optimizes pricing models and value extraction using AI-driven multi-dimensional analysis.\n\n\n\n## **2.3 DSP Layer (Data Service Platform)**\n\n\\- Core data service hub offering task matching, quality validation, asset minting, and revenue distribution.\n\n\\- Enterprises can issue customized data collection, labeling, and pre-processing tasks automatically matched to optimal contributors.\n\n\\- Provides automated on-chain data ownership, NFT minting, and transparent income distribution.\n\n\n\n## **2.4 Blockchain Ownership Layer**\n\n\\- On-chain data asset system managing ownership, revenue distribution, and governance rights.\n\n\\- Supports staking governance, revenue sharing, and incentive control mechanisms.\n\n\\- Incorporates zero-knowledge proofs and privacy-preserving computations for secure and compliant data usage.\n\n \n\n \n\n# 3. xKnown Agent Layer: Intelligent Data Value Extraction Architecture\n\n## **3.1 Architecture Overview**\n\nThe Agent Layer uses an AI-driven decision architecture where AI Agents autonomously discover, evaluate, and extract data value from voice inputs gathered via hardware devices. Moving beyond rule-based systems, it leverages ML models and LLMs for logical reasoning and valuation.\n\n\n\n### **3.1.1 Design Principles**\n\n\\- AI Agent First: Core decisions are AI-led, with rule engines offering auxiliary fallback.\n\n\\- Value Discovery Driven: Prioritizes uncovering commercial value within the data itself.\n\n\\- Privacy by Design: Ensures privacy compliance throughout processing, mitigating leakage risks.\n\n\n\n### **3.1.2 System Modules**\n\n\\- Data Collection & Storage: Encrypted, anonymized data uploaded to distributed data lake.\n\n\\- Pre-Processing: Speech transcription, semantic parsing, sentiment analysis generates structured data.\n\n\\- AI Agent Valuation: Deep analysis outputs scoring and labels.\n\n\\- Task Orchestration: Coordinates data flow, task scheduling, and decision execution.\n\n![Upload](https://virtualprotocolcdn.s3.ap-southeast-1.amazonaws.com/virtuals/33111/uploads/e47a1de2-f4f2-4415-99f2-a7a33af7199c.png) \n\n\n\n## 3.2 Data Collection and Storage\n\n### 3.2.1 Collection Process\n\nIn this phase, voice data is collected using customized hardware firmware. Devices apply local noise reduction algorithms to eliminate environmental noise. Embedded encryption algorithms (symmetric or hybrid) encrypt raw data for secure transmission. Before upload, sensitive information such as names, phone numbers, and addresses is anonymized through keyword dictionaries and entity recognition models.\n\nFor unstable network environments, segmented transmission is used. 
Audio recordings are split into multiple fragments, each uploaded independently with breakpoint resumption capability for interrupted uploads.\n\nUploaded data is stored in xKnown's distributed data lake, utilizing object storage technology with version control and redundant backups to ensure data security and availability.\n\n\n\n### 3.2.2 Data Privacy and Security\n\n\\- End-to-End Encryption: Full-chain encryption from device to cloud using TLS and application-level encryption.\n\n\\- Local Anonymization: Sensitive entity recognition, rule filtering, and content labeling ensure personal data is stripped prior to upload.\n\n\\- Access Control: Role-based access control (RBAC) and least privilege principles restrict module-level access rights.\n\n\\- Audit Logs: All data access is logged for security auditing and traceability.\n\n\n\n### 3.2.3 Data Processing Workflow\n\n\\- **P**re-processing Trigger: The orchestration system fetches new data batches for speech recognition and transcription.\n\n\\- Semantic Structuring: NLP models extract keywords, identify topics, disambiguate semantics, and generate sentiment labels, outputting standardized structured formats.\n\n\\- Value Evaluation Execution: Structured data is scored by AI Agent models using multi-factor valuation logic.\n\n\\- Incentive Coefficient Calculation: Contributor incentive coefficients are calculated based on data value, scarcity, and other multi-dimensional factors.\n\n\\- Storage Confirmation: Each uploaded data fragment is assigned a unique identifier with version control and archived.\n\n\\- Ownership Synchronization On-chain: After all processing is complete, data hashes and user DIDs are written to the blockchain to finalize ownership confirmation.\n\n ![Upload](https://virtualprotocolcdn.s3.ap-southeast-1.amazonaws.com/virtuals/33111/uploads/e906c5fb-a6c7-4be5-ac20-2074db72bb61.png)\n\n\n\n## 3.3 Data Pre-processing Module\n\n### 3.3.1 Speech Recognition and Transcription\n\nAudio data enters ASR models trained on industry-specific corpora. Acoustic and language models jointly infer transcripts, handling accents, speeds, and technical terms robustly. Custom hotword dictionaries can be added for specialized domains.\n\n\n\n### 3.3.2 Semantic Parsing and Sentiment Analysis\n\n\\- Lexical Analysis: Converts sentences into part-of-speech tags and dependency trees.\n\n\\- Keyword Extraction: Identifies core topic words and entities.\n\n\\- Topic Classification: Uses LLMs for contextual understanding and topic recognition.\n\n\\- Disambiguation: Sliding window context adjusts word sense interpretations.\n\n\\- Sentiment Analysis: Multi-classifiers label sentiment (positive, neutral, negative) with confidence scores.\n\n\n\n### 3.3.3 Standardization and Structuring\n\n\\- Character Encoding: Unified formats like UTF-8.\n\n\\- Timeline Alignment: Uses timestamps and speaker separation models.\n\n\\- Noise Filtering: Removes duplicates, silence, and irrelevant segments.\n\n\\- Output Format: {Speaker ID, Timestamp, Text Content, Topic Tags, Sentiment Tags}.\n\n \n\n## 3.4 AI Agent Valuation Module\n\nWithin the xKnown ecosystem, the Agent Layer serves as the intelligent core of the entire data valuation process, performing end-to-end, automated evaluation of voice data, transforming raw audio into fully quantified digital assets. 
The system’s core valuation workflow consists of five key stages:\n\n![Upload](https://virtualprotocolcdn.s3.ap-southeast-1.amazonaws.com/virtuals/33111/uploads/f5d28dc2-843f-420c-9109-1545be1098d3.png)  \n\n\n\n### 3.4.1 Intelligent Perception and Semantic Understanding\n\nThe valuation process begins with multimodal perception. The system first conducts comprehensive quality checks on the incoming audio data, evaluating technical parameters such as signal-to-noise ratio (SNR), sampling rate, and data completeness to ensure data usability. Real-time transcription is then performed across more than 30 languages using state-of-the-art speech recognition models (e.g., Whisper, Wav2Vec2). Simultaneously, emotional features such as tone and sentiment are extracted through transformer-based emotion classification models. Speaker identity features are captured via advanced speaker embedding algorithms (e.g., x-vector, ECAPA-TDNN), enabling individual-level deduplication and traceability.\n\nOnce transcription is complete, the semantic understanding module is activated. This module generates deep semantic embeddings using models such as BERT and RoBERTa, quantifying linguistic logic, information density, and domain complexity. Named entity recognition automatically extracts key entities such as people, organizations, locations, and events, while knowledge relationship extraction (e.g., OpenIE) identifies logical dependencies and causal chains, further enriching data structure. A fusion algorithm integrates both semantic and emotional dimensions to construct a comprehensive semantic profile for each data record.\n\n\n\n### 3.4.2 Scenario Identification and Commercial Value Estimation\n\nFollowing semantic analysis, the scenario recognition module evaluates the data’s real-world application context and commercial potential. Acoustic scene classification models identify recording environments (e.g., meeting rooms, construction sites, outdoor spaces). Dialogue pattern detection algorithms classify the conversation type (monologue, multi-party discussion, or interview). Industry domain classification is automatically performed based on language patterns and domain-specific keywords (e.g., healthcare, legal, education, finance). Leveraging historical market data, the system applies gradient boosting models (e.g., XGBoost) to estimate the potential market value of the data for AI training marketplaces. 
Contextual metadata such as timestamps, device state, and geographic location are incorporated to construct a holistic usage profile for each dataset.\n\n\n\n### 3.4.3 Six-Dimensional Valuation Model and Hybrid Decision Engine\n\nAll semantic and contextual insights are fed into xKnown’s proprietary Hybrid Valuation Engine, which combines AI-driven inference with rule-based scoring logic:\n\nAI Agent Layer: Advanced language models (e.g., GPT-4, Claude) apply complex reasoning based on structured prompt engineering, generating comprehensive valuation outputs and confidence scores.\n\nRule Engine Layer: Traditional rule-based algorithms apply long-standing scoring weights and consistency rules to normalize technical metrics such as length, sampling rate, SNR, and semantic density.\n\nFusion Decision Layer: Bayesian inference algorithms integrate both AI and rule-based evaluations, balancing decision flexibility with system stability.\n\nAnomaly Detection & Traceability: Built-in safeguards ensure transparency and auditability under outliers, noisy data, or edge cases, while maintaining interpretability.\n\nThe six core valuation dimensions include:\n\n\\- Technical Quality: Foundational data usability metrics.\n\n\\- Content Value: Semantic depth and informational complexity.\n\n\\- Scarcity: Linguistic uniqueness and corpus rarity.\n\n\\- Market Demand: Dynamic supply-demand indicators and enterprise training needs.\n\n\\- Application Potential: Suitability for AI model training and transfer learning.\n\n\\- Compliance & Safety: Legal, privacy, and regulatory compliance screening.\n\n ![Upload](https://virtualprotocolcdn.s3.ap-southeast-1.amazonaws.com/virtuals/33111/uploads/142207bc-17fc-4d5a-b710-e06d5b923976.png)\n\nThese six dimensions collectively determine the core multiplier logic of data valuation, which, when combined with incentive coefficients, directly inform final revenue allocation weights.\n\n\n\n### 3.4.4 Scarcity, Market Forecasting, and Risk Control Agents\n\nBeyond the core valuation system, xKnown deploys three auxiliary Agent modules to enhance value precision and control systemic risks:\n\nScarcity Discovery Agent: Quantifies linguistic rarity, content uniqueness, and temporal novelty using global language frequency databases, local-sensitive hashing (LSH) for de-duplication, and semantic embedding comparisons. The resulting scarcity score is directly applied to high-multiplier incentives.\n\nMarket Prediction Agent: Forecasts future demand for data assets using time-series models (e.g., LSTM, Prophet), supply-demand simulations, price elasticity analysis, and Monte Carlo scenario modeling. It integrates social sentiment signals from media and networks to capture short-term market expectations.\n\nRisk Assessment Agent: Performs real-time legal and reputational risk evaluation through PII detection, automated GDPR/CCPA compliance checks, brand reputation keyword filtering, and unsafe content moderation APIs. Risk scores are applied as revenue deduction buffers or dataset exclusion filters.\n\n\n\n### 3.4.5 Value Multiplier Algorithm Description\n\nFollowing the six-dimensional base scoring, xKnown applies a Value Multiplier Algorithm to dynamically adjust the final reward weight for each data asset. 
At its core, the algorithm leverages an S-curve gradient function to smoothly map score intervals, preventing extreme incentive distortion at both low and high ends of the scoring spectrum.\n\nThe algorithm first calculates a base multiplier for each evaluation dimension, derived from its normalized score (0-100). Subsequently, the multiplier incorporates user-specific participation parameters drawn from the incentive layer—such as staking volume, contributor tier, and community engagement—stacking staking incentives, tier bonuses, and community contribution boosts to form the final enhancement coefficient. Throughout the computation, dimension-specific caps ensure that multiplier outputs remain within predefined system-safe boundaries to preserve platform stability.\n\n\n\n### 3.4.6 Evaluation Output & Pricing Delivery\n\nUpon completing the full assessment workflow, the system generates a comprehensive Valuation Report for each individual voice data asset. This report consolidates all six-dimensional scores, scarcity metrics, market demand forecasts, risk assessments, and fused decision outputs from both AI-driven and rule-based engines.\n\nFor example, if a data sample achieves high technical quality, rich semantic content, strong scarcity attributes, favorable market demand, and broad application potential, while maintaining minimal compliance risks, the system will assign it a higher value multiplier through this comprehensive assessment.\n\nThe resulting valuation serves as the definitive pricing reference for multiple downstream mechanisms: NFT minting, revenue sharing, staking weight allocation, and secondary market liquidity. Each valuation report remains fully traceable and explainable, providing transparent governance, transaction fairness, and ecosystem credibility.\n\n \n\n## 3.5 Agent Workflow Orchestration\n\nThe Agent Workflow Orchestration Layer serves as the central control logic of the xKnown intelligent valuation system. Its mission is to coordinate complex multi-agent tasks into a highly efficient, scalable, and fault-tolerant end-to-end processing pipeline, ensuring stable real-time valuation even under high concurrency and heterogeneous task loads.\n\n![Upload](https://virtualprotocolcdn.s3.ap-southeast-1.amazonaws.com/virtuals/33111/uploads/ca2b1a45-33ce-428b-8bb7-22686c603177.png) \n\n\n\n### 3.5.1 Workflow-Based Agent Architecture\n\nAt the core of the orchestration design lies a workflow-driven execution framework, which decomposes the full data valuation pipeline into modular, independently executable stages. Each AI Agent is registered as a distinct node within this workflow, enabling flexible composition, parallelization, and dynamic optimization.\n\nKey architectural principles include:\n\n\\- Task modularization: Complex valuation is broken down into discrete and reusable Agent functions (e.g. 
### 3.4.6 Evaluation Output & Pricing Delivery

Upon completing the full assessment workflow, the system generates a comprehensive Valuation Report for each individual voice data asset. This report consolidates all six-dimensional scores, scarcity metrics, market demand forecasts, risk assessments, and fused decision outputs from both the AI-driven and rule-based engines.

For example, a data sample that achieves high technical quality, rich semantic content, strong scarcity attributes, favorable market demand, and broad application potential, while carrying minimal compliance risk, is assigned a higher value multiplier through this comprehensive assessment.

The resulting valuation serves as the definitive pricing reference for multiple downstream mechanisms: NFT minting, revenue sharing, staking weight allocation, and secondary market liquidity. Each valuation report remains fully traceable and explainable, supporting transparent governance, fair transactions, and ecosystem credibility.

## 3.5 Agent Workflow Orchestration

The Agent Workflow Orchestration Layer serves as the central control logic of the xKnown intelligent valuation system. Its mission is to coordinate complex multi-agent tasks into a highly efficient, scalable, and fault-tolerant end-to-end processing pipeline, ensuring stable real-time valuation even under high concurrency and heterogeneous task loads.

![Upload](https://virtualprotocolcdn.s3.ap-southeast-1.amazonaws.com/virtuals/33111/uploads/ca2b1a45-33ce-428b-8bb7-22686c603177.png)

### 3.5.1 Workflow-Based Agent Architecture

At the core of the orchestration design lies a workflow-driven execution framework, which decomposes the full data valuation pipeline into modular, independently executable stages. Each AI Agent is registered as a distinct node within this workflow, enabling flexible composition, parallelization, and dynamic optimization.

Key architectural principles include:

\- Task modularization: Complex valuation is broken down into discrete, reusable Agent functions (e.g., transcription, semantic embedding, scene recognition, dimensional scoring).

\- Directed Acyclic Graph (DAG) scheduling: Inter-task dependencies are mapped as DAG structures, ensuring strict data integrity across sequential and parallelized stages.

\- Agent isolation: Each Agent operates independently, allowing flexible orchestration of AI models with varying compute demands, model sizes, and failure isolation domains.

### 3.5.2 Workflow Engine Implementation

The orchestration engine parses and manages full pipeline execution using a DAG-based system, dynamically analyzing dependencies and optimizing compute allocation. Core execution logic includes:

\- DAG parsing and dependency resolution to derive optimal execution plans.

\- Compute resource allocation based on available system GPUs, CPUs, and memory pools.

\- Concurrent task scheduling for parallelizable Agents.

\- Result aggregation, merging intermediate Agent outputs into unified valuation records.

\- Exception handling through task retries, fallbacks, and graceful degradation under fault scenarios.

This workflow engine ensures deterministic, explainable execution traces across billions of data records, providing both high throughput and auditability.
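As an illustration of the DAG scheduling principle, the Python sketch below groups Agents into "waves" that can run concurrently once all their dependencies have completed (Kahn's algorithm). The agent names and dependency edges are hypothetical, mirroring the pipeline stages named in 3.5.1.

```python
from collections import defaultdict

def schedule_waves(deps: dict[str, list[str]]) -> list[list[str]]:
    """deps maps each Agent to the Agents it depends on; returns waves of
    Agents whose dependencies are satisfied and can execute in parallel."""
    nodes = set(deps) | {d for ds in deps.values() for d in ds}
    indegree = {n: len(deps.get(n, [])) for n in nodes}
    dependents = defaultdict(list)
    for node, ds in deps.items():
        for d in ds:
            dependents[d].append(node)
    wave = sorted(n for n in nodes if indegree[n] == 0)
    waves = []
    while wave:
        waves.append(wave)
        nxt = []
        for n in wave:
            for m in dependents[n]:
                indegree[m] -= 1
                if indegree[m] == 0:
                    nxt.append(m)
        wave = sorted(nxt)
    if sum(map(len, waves)) != len(nodes):
        raise ValueError("dependency cycle detected: not a DAG")
    return waves

pipeline = {
    "transcription": [],
    "semantic_embedding": ["transcription"],
    "scene_recognition": ["transcription"],
    "dimensional_scoring": ["semantic_embedding", "scene_recognition"],
}
print(schedule_waves(pipeline))
# [['transcription'], ['scene_recognition', 'semantic_embedding'], ['dimensional_scoring']]
```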
### 3.5.3 Intelligent Scheduling & Optimization

To maintain optimal processing performance under fluctuating workloads, the system integrates a real-time Intelligent Workflow Scheduler that continuously monitors execution states and dynamically reallocates resources. Key scheduling algorithms include:

\- Critical Path Method (CPM) analysis: Identifies latency-sensitive execution chains to minimize bottlenecks.

\- Resource-Constrained Optimization: Balances Agent assignments across compute clusters given GPU/CPU availability.

\- Predictive Scheduling: Learns from historical execution times to anticipate future task durations.

\- Dynamic Rebalancing: Actively shifts task priorities in real time based on live system load and valuation importance.

By continuously analyzing compute saturation, task wait times, and Agent health, the scheduler ensures stable valuation output even during surges of high-value data submissions.

### 3.5.4 Real-Time Monitoring & Failure Recovery

The Workflow Monitoring Manager operates alongside orchestration to ensure system observability, anomaly detection, and automated failure recovery. Monitoring mechanisms include:

\- Real-time metrics: Collects Agent-level execution latency, compute usage, and task success rates.

\- State persistence: Logs full workflow execution checkpoints into distributed Redis clusters.

\- Anomaly detection: Applies statistical models to flag abnormal execution patterns or model drift.

\- Resilient recovery: Supports checkpoint-based job restarts and fallback model substitutions upon fault triggers.

\- Performance tuning: Continuously adjusts Agent resource allocation to eliminate bottlenecks.

The orchestration layer guarantees that even under extreme workloads or partial system faults, valuation pipelines maintain both continuity and full auditability.

## 3.6 Web3-based Data Ownership and Provenance

### 3.6.1 Ownership Logic

\- Each dataset generates a unique data hash fingerprint upon ingestion.

\- Decentralized identifiers (DIDs) bind data to user identity.

\- Smart contracts anchor ownership, timestamps, hashes, and rights on-chain.

\- Ownership records are stored on public or consortium blockchains for transparency and auditability.
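A minimal Python sketch of this ownership logic, assuming SHA-256 as the hash fingerprint and illustrative field names for the record a smart contract would anchor on-chain; the whitepaper does not specify these details.

```python
import hashlib
import json
import time

def fingerprint(raw_audio: bytes) -> str:
    """Derive a unique, tamper-evident hash fingerprint for a dataset."""
    return hashlib.sha256(raw_audio).hexdigest()

def ownership_record(raw_audio: bytes, owner_did: str) -> dict:
    """Build the record binding the data hash to the contributor's DID."""
    return {
        "data_hash": fingerprint(raw_audio),
        "owner_did": owner_did,         # decentralized identifier of the owner
        "timestamp": int(time.time()),  # anchoring time (Unix seconds)
        "rights": ["license", "trade", "revenue_share"],  # hypothetical rights set
    }

print(json.dumps(ownership_record(b"voice sample bytes", "did:example:alice"), indent=2))
```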
### 3.6.2 Ownership Benefits

\- Trusted Provenance: Prevents forgery and unauthorized tampering.

\- Assetization: Enables licensing, trading, and revenue sharing.

\- Cross-Platform Trust: Enhances usability in multi-party collaborations.

\- Compliance & Auditing: Satisfies regulatory traceability requirements.

## 3.7 System Security and Scalability

\- End-to-end encryption across device, transmission, storage, and processing layers.

\- Regulatory compliance with GDPR, CCPA, and domestic data laws.

\- Full audit trails for reasoning logic, score revisions, and model versioning.

# 4. DSP Layer - Data Service Platform

## 4.1 Overview

The Data Service Platform (DSP) serves as the primary commercial interface of the xKnown ecosystem, enabling trusted, privacy-preserving, and verifiable data exchange between contributors, AI developers, enterprises, and model builders. Acting as both a data marketplace and an AI asset brokerage layer, DSP bridges raw data supply with structured AI demand through a multi-sided platform architecture.

The DSP layer supports three major roles:

\- C-side (Data Contributors): Individuals upload their personal data or perform micro-data tasks such as annotation, labeling, or transcription. Uploaded data undergoes initial quality screening by AI Agents to enforce minimum acceptance thresholds.

\- B-side (Enterprises & AI Developers): Organizations can submit customized data acquisition requests (Call-for-Data Services), issue real-time crowdsourcing tasks, and subscribe to verified datasets pre-qualified by xKnown's agent-based valuation system.

\- AI Marketplace: An open marketplace for trading AI assets, including datasets, pre-trained models, AI Agents, and workflow pipelines. Developers and AI builders may purchase clean, high-value datasets or offer proprietary models, with flexible licensing and monetization structures. DSP may also collaborate as a syndication partner with third-party AI marketplaces such as Sahara AI.

![Upload](https://virtualprotocolcdn.s3.ap-southeast-1.amazonaws.com/virtuals/33111/uploads/ce4def68-ce1b-479a-8ab7-9dff69f65542.png)

## 4.2 Privacy-Preserving Data Valuation Protocol

Given the sensitivity of personal data, the DSP integrates multiple privacy-preserving valuation techniques to ensure that:

\- Agent-based valuation can occur without raw data leakage.

\- Data contributors retain sovereignty over their private content.

\- Valuation results remain transparent, auditable, and usable for transaction settlement.

The core privacy-enhanced valuation stack combines:

\- Differential Privacy (DP): Injects calibrated noise during valuation aggregation, ensuring individual record privacy.

\- Homomorphic Encryption (HE): Enables AI Agents to compute partial valuation scores directly over encrypted feature vectors, without decrypting sensitive inputs.

\- Secure Multi-Party Computation (SMPC): Supports joint valuation between multiple stakeholders without full data exposure.

\- Shapley-Value-Inspired DP Extensions: Applies Shapley value calculations under differential privacy to estimate marginal data contributions, allowing buyers to assess expected value prior to direct access.

In practice, B-side buyers may access summary valuation scores derived from encrypted pipelines, but cannot retrieve raw data unless authorized through contractual settlement.
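To ground the Differential Privacy component, the Python sketch below releases a mean valuation score under the Laplace mechanism. The epsilon value, score range, and sample scores are illustrative assumptions; the whitepaper does not publish its DP parameters.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) by inverse-transform sampling."""
    u = random.random() - 0.5                 # uniform on [-0.5, 0.5)
    u = max(min(u, 0.499999), -0.499999)      # guard against log(0)
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_average_score(scores: list[float], epsilon: float = 1.0,
                     score_range: float = 100.0) -> float:
    """Release the mean of scores (each bounded in [0, score_range]) with
    epsilon-differential privacy via the Laplace mechanism."""
    sensitivity = score_range / len(scores)   # sensitivity of the bounded mean
    return sum(scores) / len(scores) + laplace_noise(sensitivity / epsilon)

# A buyer sees a noisy aggregate, never any contributor's individual score.
print(round(dp_average_score([72.0, 85.5, 64.0, 91.0], epsilon=0.5), 2))
```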
## 4.3 Fully Verifiable Batch Data Trading Protocol

To support scalable, enterprise-grade data transactions, DSP implements a zero-knowledge-based settlement mechanism that allows bulk dataset exchanges between multiple sellers and institutional buyers.

![Upload](https://virtualprotocolcdn.s3.ap-southeast-1.amazonaws.com/virtuals/33111/uploads/268e7386-e2e1-484c-b6d7-e8db234b6023.jpg)

*More details about this protocol can be found on our website.*

This mechanism enables:

\- Trustless multi-party batch transactions.

\- Decentralized settlement and escrow through blockchain.

\- Full data privacy preservation until transaction finalization.

\- Flexible pricing structures linked directly to individual valuation scores.

## 4.4 DSP Platform Positioning within xKnown

The DSP layer functions not only as a marketplace but as a complete trust-minimized data brokerage protocol stack:

\- Fully integrated with xKnown’s upstream AI Agent valuation pipeline;

\- Powered by encrypted valuation models that balance privacy and transparency;

\- Enabling cross-border, multi-party institutional data commerce at scale;

\- Providing composability for enterprise-specific marketplaces, licensing syndication, and cross-chain data liquidity.

# 5. Token Model

## 5.1 Ecosystem Roles

xKnown's decentralized collaboration ecosystem revolves around $XKNOWN as its core engine, connecting data contributors, enterprise users, model developers, and governance participants through an incentive-aligned token system. Four primary ecosystem roles are defined:

**(1) Data Contributors**

\- Individuals collect and upload voice data.

\- Earn $XKNOWN rewards based on data quality, usage frequency, and labeling accuracy.

\- Can authorize enterprises to license data for additional revenue sharing.

**(2) Data Consumers (Enterprises / Developers)**

\- Enterprises (e.g., AI firms, corporate clients) submit data requests via DSP.

\- Pay data access and task customization fees in $XKNOWN.

\- Submit requests for semantic labeling, rare language collection, or multilingual training sets.

**(3) DSP Node Operators**

\- Operate service nodes for task scheduling, data validation, and governance voting.

\- Stake $XKNOWN tokens as operational collateral.

\- Receive revenue shares based on contributions and governance participation.

**(4) Governance Participants**

\- All token holders may participate in governance:

  - Propose or vote on data quality standards.

  - Decide buyback-and-burn parameters.

  - Vote on DSP upgrades and incentive model adjustments.

![Upload](https://virtualprotocolcdn.s3.ap-southeast-1.amazonaws.com/virtuals/33111/uploads/5119ac4d-0a37-4f03-b92d-c609a2138fea.png)

## 5.2 $XKNOWN Core Utilities

\- Upload Incentives: Rewards for uploading and labeling tasks, distributed dynamically based on data quality.

\- Feature Unlocks: Unlock advanced Agent features (export formats, summarization templates, private model deployment) with $XKNOWN.

\- Data Trading Medium: Enterprises use $XKNOWN to purchase data access or custom tasks.

\- Platform Governance: Stake $XKNOWN to influence task priority, scoring rights, and revenue-share design.

\- Node Staking: DSP nodes must stake $XKNOWN to guarantee service quality.

## 5.3 Deflationary and Long-Term Incentive Mechanisms

**(1) Buyback & Burn:**

\- 50% of DSP revenue is used for quarterly buybacks.

\- Purchased tokens are sent to a burn pool.

\- On-chain transparency ensures sustained supply reduction.

**(2) Multi-Platform Dual Incentive Model:**

\- The DSP layer integrates with multiple leading AI data marketplaces.

\- Cross-platform data transactions allow $XKNOWN participants to accumulate additional ecosystem incentive points alongside native platform rewards.

\- This architecture amplifies reward multipliers, enhances user participation yields, and fosters long-term ecosystem synergies across multiple data networks.

**(3) Data Reputation and Compound Growth:**

\- Contributors earn Data Reputation points based on data stability, usage, and sustained supply activity.

\- Reputation boosts data circulation weight.

\- Incentivizes long-term, high-quality data contribution with compounding effects.
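A back-of-envelope Python sketch of the buyback-and-burn arithmetic in (1) above; the revenue and price figures are purely illustrative, while the 50% buyback share and 1B supply come from the document.

```python
def quarterly_burn(dsp_revenue_usd: float, token_price_usd: float,
                   buyback_share: float = 0.50) -> float:
    """Tokens removed from supply in one quarter: 50% of DSP revenue
    buys $XKNOWN at the prevailing price, and the tokens are burned."""
    return dsp_revenue_usd * buyback_share / token_price_usd

SUPPLY = 1_000_000_000  # 1B total $XKNOWN supply
burned = quarterly_burn(dsp_revenue_usd=500_000, token_price_usd=0.05)
print(f"burned: {burned:,.0f} tokens ({burned / SUPPLY:.3%} of supply)")
# burned: 5,000,000 tokens (0.500% of supply)
```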
## 5.4 Growth Flywheel

\- **AI Data Flywheel:** Contributors upload → richer datasets → Agent optimization → rising data value → enterprises purchase → recurring $XKNOWN usage → broader user participation → better model performance → more enterprises attracted.

\- **Governance Flywheel:** Users stake → governance optimization → node activity grows → task matching and data flow improve → transaction volume rises → $XKNOWN use cases expand.

![Upload](https://virtualprotocolcdn.s3.ap-southeast-1.amazonaws.com/virtuals/33111/uploads/89d8408e-499b-4af8-bd82-7202029cda30.png)

## 5.5 Token Distribution & Vesting Plan

The total token supply is distributed as follows: 37.5% allocated to the Public Sale, 12.5% to the Liquidity Pool, 15% to the Team, and 35% to Ecosystem Development. Team and Ecosystem allocations are subject to a 12-month linear vesting schedule starting from October 20, 2025, following a one-month cliff period designed to build community trust. Public Sale and Liquidity Pool allocations are fully unlocked at launch, while vesting for the remaining portions proceeds on a monthly basis.

All airdrop programs, early-stage private sales (including strategic VC rounds), incentive campaigns, and partnership grants are drawn entirely from the Ecosystem Development allocation. This structure ensures maximum long-term flexibility for platform growth, user acquisition, and strategic ecosystem expansion, while maintaining full transparency over total supply circulation.
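A worked Python sketch of the stated schedule, modeling twelve equal monthly unlocks that begin one month after the October 20, 2025 start (the one-month cliff). The allocation percentages and 1B supply are taken directly from the document; the simplified equal-tranche model is an assumption.

```python
TOTAL_SUPPLY = 1_000_000_000
ALLOCATIONS = {"Team": 0.15, "Ecosystem": 0.35}  # 150M and 350M tokens

def unlocked(allocation_pct: float, months_elapsed: int) -> float:
    """Tokens unlocked after `months_elapsed` full months from the vesting
    start; nothing unlocks before month 1 (the cliff), all by month 12."""
    tranches = min(max(months_elapsed, 0), 12)
    return TOTAL_SUPPLY * allocation_pct * tranches / 12

for name, pct in ALLOCATIONS.items():
    print(name, f"{unlocked(pct, 1):,.0f} after month 1,",
          f"{unlocked(pct, 6):,.0f} after month 6,",
          f"{unlocked(pct, 12):,.0f} fully vested")
# Team 12,500,000 after month 1, 75,000,000 after month 6, 150,000,000 fully vested
# Ecosystem 29,166,667 after month 1, 175,000,000 after month 6, 350,000,000 fully vested
```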
"https://s3.ap-southeast-1.amazonaws.com/virtualprotocolcdn/33111_x_Known_bbcf9fee74.ai", "previewUrl": null, "provider": "aws-s3", "provider_metadata": null, "createdAt": "2025-06-17T15:53:25.606Z", "updatedAt": "2025-06-17T15:53:25.606Z" }, "genesis": { "id": 5764, "startsAt": "2025-06-20T12:30:00.000Z", "endsAt": "2025-06-21T12:30:00.000Z", "status": "INITIALIZED", "genesisId": "258", "genesisTx": "0x13029d31a261a7c57b5f075686b9de3326b8a341dd8e50230f8754cd5e781586", "genesisAddress": "0x1c5D67ae18d7Dc0d92004b89c93d6240C9087878", "result": null, "processedParticipants": "0", "createdAt": "2025-06-17T15:49:36.535Z", "updatedAt": "2025-06-17T17:07:40.083Z" }, "stats": { "contributionsCount": 0, "contributorsCount": 0, "contributionVersions": [], "totalStakeAmount": "0.0", "stakerCount": 0, "validatorCount": 0 }, "characterDescription": "", "projectMembers": [ { "id": 27640, "isAccepted": true, "title": "core", "createdAt": "2025-06-17T15:49:36.286Z", "updatedAt": "2025-06-17T16:55:09.804Z", "walletAddress": "0xbDf8cAB0540767463761B6ec38f19307AD1e7Cf2", "virtual": { "id": 33111, "creator": { "id": 412875 } }, "user": { "id": 412875, "socials": { "VERIFIED_LINKS": { "TWITTER": "https://x.com/Anna_xeq3" } }, "bio": "core of xKnown.ai\r\nSpecialized in Human-Computer Interaction and Product Design, graduated from Simon Fraser University. CHI author and former marketing director at a listed 4A agency. Experienced in HCI, hardware product design, and market strategy. Joined Web3 in 2021, now leading product and marketing at xKnown.ai.", "avatar": { "id": 44842, "url": "https://s3.ap-southeast-1.amazonaws.com/virtualprotocolcdn/Wechat_IMG_10067_2e892adc56.jpg" }, "walletAddress": "0xbDf8cAB0540767463761B6ec38f19307AD1e7Cf2" } }, { "id": 27654, "isAccepted": true, "title": "core", "createdAt": "2025-06-17T16:44:06.805Z", "updatedAt": "2025-06-17T17:08:49.026Z", "walletAddress": null, "virtual": { "id": 33111, "creator": { "id": 412875 } }, "user": { "id": 383459, "socials": null, "bio": "Specializing in AI systems, Web 3.0, and blockchain research. Began research in the AI field in 2019 and have been focusing on the integration of AI technologies with Web 3.0 and blockchain since 2022. With several years of research experience, currently serve as the Chief Scientist at xKnown.ai, and lead technical research on AI agents and the practical application of AI in the Web 3.0 ecosystem.", "avatar": { "id": 44845, "url": "https://s3.ap-southeast-1.amazonaws.com/virtualprotocolcdn/pinkpig_f4ba462747.png" }, "walletAddress": "0xe40595ce8c4015B97220A5DaBd76B53A26C0c94b" } }, { "id": 27657, "isAccepted": true, "title": "core", "createdAt": "2025-06-17T16:48:14.002Z", "updatedAt": "2025-06-17T17:07:57.931Z", "walletAddress": null, "virtual": { "id": 33111, "creator": { "id": 412875 } }, "user": { "id": 412954, "socials": null, "bio": "With a background in financial statistics, brings 2 years of experience designing economic systems in the gaming industry and 3 years of research experience in the crypto space. 
## Project members

\- Core (product & marketing): Specialized in Human-Computer Interaction and Product Design, graduated from Simon Fraser University. CHI author and former marketing director at a listed 4A agency. Experienced in HCI, hardware product design, and market strategy. Joined Web3 in 2021, now leading product and marketing at xKnown.ai. Twitter: https://x.com/Anna_xeq3. Wallet: 0xbDf8cAB0540767463761B6ec38f19307AD1e7Cf2

\- Core (Chief Scientist): Specializing in AI systems, Web 3.0, and blockchain research. Began AI research in 2019 and has focused on integrating AI technologies with Web 3.0 and blockchain since 2022. Leads technical research on AI agents and the practical application of AI in the Web 3.0 ecosystem. Wallet: 0xe40595ce8c4015B97220A5DaBd76B53A26C0c94b

\- Core (token economy): Background in financial statistics, with 2 years of experience designing economic systems in the gaming industry and 3 years of research experience in the crypto space. Currently focuses on token economy design, incentive mechanism structuring, and ecosystem value loop optimization at xKnown.ai. Wallet: EFoBqzqWJAGTbq8i5QbeHB6VGW43owMmBpKSjGuEDK55

\- Official project account: xKnown.ai. Twitter: https://x.com/xKnownai. Wallet: 0x15A86d4B94d527a62933ea3bee93FEfF593F9EF0

## Genesis sale and on-chain vesting parameters

\- Genesis sale window: 2025-06-20 12:30 UTC to 2025-06-21 12:30 UTC (status at last update: INITIALIZED).

\- Team allocation: 150,000,000 tokens (15%) to 0x7Bc987BD8511C8cd6Cb008938F0D69B3120Fd2c0, 12-month linear vesting starting 2025-10-20.

\- Ecosystem allocation: 350,000,000 tokens (35%) to 0x5FD98a7E9d90dA56005a088C44D3Fc4Fd5D14298, 12-month linear vesting starting 2025-10-20.

Investment info last updated: Jun 17, 2025 17:08