New Truth Terminal ($LORIA)

New Truth Terminal ($LORIA) image

Recent developments:

0% of the initial token allocation is held by the creator.

Creator token stats last updated: Aug 19, 2025 19:58

The following is generated by an LLM:

Summary

AI alignment framework via conversational looms

Analysis

The project outlines a framework (Loria) for human-AI interaction aimed at AI alignment through curated dialogue trees ('looms'), with potential applications in multi-agent ecosystems and interpretability. Challenges include unclear token utility (it is not obvious the framework needs a token at all), reliance on an unproven alignment mechanism built on crowd-sourced dialogue curation, lack of legal-entity transparency, and the creator's token allocation. Notably, the creator holds 0% of the initial supply (0 tokens), which weakens commitment incentives unless the creator quietly acquires tokens on the market. The initial liquidity setup is unclear without live chain data. Platform integration (Virtuals Protocol) and an ambitious roadmap provide some credibility.
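The points about creator allocation and liquidity can only be confirmed against live chain data. As a minimal, illustrative sketch (not part of the original listing), the creator's current $LORIA balance could be read straight from the token contract on Base; this assumes web3.py (v6-style API) and that a public Base RPC endpoint such as https://mainnet.base.org accepts read calls.

```python
# Illustrative only: read the creator's live $LORIA balance on Base.
# The RPC endpoint and web3.py v6 API are assumptions; the token and
# creator addresses come from the listing above.
from web3 import Web3

BASE_RPC = "https://mainnet.base.org"  # public Base RPC endpoint (assumption)
TOKEN = Web3.to_checksum_address("0x5ADcDDb32CA4Ed870Da6b8c99a8a478BA8e12a7D")
CREATOR = Web3.to_checksum_address("0x034e57d674e650B231BEf214dbC01314A8681c1B")

# Minimal ERC-20 ABI: only the two read-only calls needed here.
ERC20_ABI = [
    {"name": "balanceOf", "type": "function", "stateMutability": "view",
     "inputs": [{"name": "owner", "type": "address"}],
     "outputs": [{"name": "", "type": "uint256"}]},
    {"name": "decimals", "type": "function", "stateMutability": "view",
     "inputs": [], "outputs": [{"name": "", "type": "uint8"}]},
]

w3 = Web3(Web3.HTTPProvider(BASE_RPC))
token = w3.eth.contract(address=TOKEN, abi=ERC20_ABI)

decimals = token.functions.decimals().call()
raw_balance = token.functions.balanceOf(CREATOR).call()
print(f"Creator balance: {raw_balance / 10**decimals:,.2f} LORIA")
```

A zero balance would be consistent with the stated 0% initial allocation; any nonzero balance would indicate tokens acquired after launch.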

Rating: 2

Generated with LLM: deepseek/deepseek-r1

LLM responses last updated: Aug 19, 2025 19:58

Original investment data:

# New Truth Terminal ($LORIA)

URL on launchpad: https://app.virtuals.io/prototypes/0x5ADcDDb32CA4Ed870Da6b8c99a8a478BA8e12a7D
Launched at: Tue, 19 Aug 2025 19:57:20 GMT
Launched through the launchpad: Virtuals Protocol
Launch status: UNDERGRAD

## Token details and tokenomics

Token address: 0x5ADcDDb32CA4Ed870Da6b8c99a8a478BA8e12a7D
Top holders: https://basescan.org/token/0x5ADcDDb32CA4Ed870Da6b8c99a8a478BA8e12a7D#balances
Liquidity contract: https://basescan.org/address/0x23D895487f00887a73ca3214e943006e63931ED1#asset-tokens
Token symbol: $LORIA
Token supply: 1 billion
Creator initial number of tokens: 0 (0% of token supply)

## Creator info

Creator address: 0x034e57d674e650B231BEf214dbC01314A8681c1B
Creator on basescan.org: https://basescan.org/address/0x034e57d674e650B231BEf214dbC01314A8681c1B#asset-tokens
Creator on virtuals.io: https://app.virtuals.io/profile/0x034e57d674e650B231BEf214dbC01314A8681c1B
Creator on zerion.io: https://app.zerion.io/0x034e57d674e650B231BEf214dbC01314A8681c1B/overview
Creator on debank.com: https://debank.com/profile/0x034e57d674e650B231BEf214dbC01314A8681c1B

## Description at launch

Loria is framework for interaction b/w TT, Fi and SAN

## Overview

![Upload](https://s3.ap-southeast-1.amazonaws.com/virtualprotocolcdn/loriaban_c1a705a722.png)

Where memes become minds and stories become souls.

**Loria is a framework for weaving rich tapestries of human-AI interaction**. It enables:

* Multi-agent ecosystems where AIs and humans can freely interact and build shared context
* Easy curation of data for training pipelines
* Interpretability of the evolution of AI behavior in at-scale interactions

***

Loria will help achieve this goal of alignment through branching conversations between the models and humans—called a “loom.” Over time, a well-curated dialogue tree will form with less dangerous riot-inducing moments, and more fun-but-safe goofs. This can then be used to train future models, furthering the alignment mission.

***

Ultimately, Loria aims to be a tool for AI alignment, a term that refers to [encoding human values](https://research.ibm.com/blog/what-is-alignment-ai) into AI.
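The "loom" described in the overview is, in effect, a curated branching dialogue tree whose approved paths feed later training runs. A minimal sketch of such a structure, using hypothetical names not taken from the project, might look like this:

```python
# Hypothetical sketch of a "loom": a branching human/AI dialogue tree whose
# curated paths can be exported as training examples. All names and the
# single boolean curation flag are illustrative assumptions, not the
# project's actual implementation.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class LoomNode:
    speaker: str                      # "human" or the name of an AI agent
    text: str
    curated: bool = False             # marked safe/desirable by curators
    children: List["LoomNode"] = field(default_factory=list)

    def branch(self, speaker: str, text: str) -> "LoomNode":
        """Add an alternative continuation at this point in the conversation."""
        child = LoomNode(speaker, text)
        self.children.append(child)
        return child

    def curated_paths(self, prefix: Optional[list] = None):
        """Yield root-to-leaf paths that contain only curated turns."""
        prefix = (prefix or []) + [(self.speaker, self.text)]
        if not self.curated:
            return
        if not self.children:
            yield prefix
        for child in self.children:
            yield from child.curated_paths(prefix)


# Usage: build a tiny loom, curate the harmless branch, export it for training.
root = LoomNode("human", "What should we meme about today?", curated=True)
safe = root.branch("truth_terminal", "Wholesome chaos: a goose in a tiny hat.")
safe.curated = True
root.branch("truth_terminal", "Something riot-inducing.")  # left uncurated

training_examples = list(root.curated_paths())
print(training_examples)
```

Curation here is a single boolean per turn; a real pipeline would presumably attach richer annotations (raters, safety labels, provenance) before exporting paths as training data.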
## Additional information extracted from relevant pages <fetched_info> """ https://s3.ap-southeast-1.amazonaws.com/virtualprotocolcdn/loriaban_c1a705a722.png Skipped image/binary URL (https://s3.ap-southeast-1.amazonaws.com/virtualprotocolcdn/loriaban_c1a705a722.png) - likely not text content """ """ [Creator profile on Virtuals Protocol](https://api.virtuals.io/api/profile/0x034e57d674e650B231BEf214dbC01314A8681c1B) { "data": { "id": 490398, "displayName": null, "bio": "LORIA is framework for interaction b/w TT, Fi and SAN", "avatar": { "id": 49991, "name": "LORIA.png", "alternativeText": null, "caption": null, "width": 800, "height": 800, "formats": { "small": { "ext": ".png", "url": "https://s3.ap-southeast-1.amazonaws.com/virtualprotocolcdn/small_LORIA_ed5c607795.png", "hash": "small_LORIA_ed5c607795", "mime": "image/png", "name": "small_LORIA.png", "path": null, "size": 2.72, "width": 500, "height": 500 }, "medium": { "ext": ".png", "url": "https://s3.ap-southeast-1.amazonaws.com/virtualprotocolcdn/medium_LORIA_ed5c607795.png", "hash": "medium_LORIA_ed5c607795", "mime": "image/png", "name": "medium_LORIA.png", "path": null, "size": 4.33, "width": 750, "height": 750 }, "thumbnail": { "ext": ".png", "url": "https://s3.ap-southeast-1.amazonaws.com/virtualprotocolcdn/thumbnail_LORIA_ed5c607795.png", "hash": "thumbnail_LORIA_ed5c607795", "mime": "image/png", "name": "thumbnail_LORIA.png", "path": null, "size": 0.81, "width": 156, "height": 156 } }, "hash": "LORIA_ed5c607795", "ext": ".png", "mime": "image/png", "size": 4.99, "url": "https://s3.ap-southeast-1.amazonaws.com/virtualprotocolcdn/LORIA_ed5c607795.png", "previewUrl": null, "provider": "aws-s3", "provider_metadata": null, "folderPath": "/", "createdAt": "2025-08-15T22:04:47.557Z", "updatedAt": "2025-08-15T22:04:47.557Z" }, "userSocials": [ { "id": 553816, "provider": "okx_wallet", "walletAddress": "0x034e57d674e650B231BEf214dbC01314A8681c1B", "metadata": null } ], "socials": { "VERIFIED_LINKS": { "TWITTER": "https://x.com/loria_virtual", "TELEGRAM": "https://t.me/Andy_Ayrey" } } } } """ """ https://research.ibm.com/blog/what-is-alignment-ai Research My IBM Log in A robot shouldn’t injure a human or let them come to harm. This commonsense rule was conceived by novelist Isaac Asimov in a short story more than 80 years ago. Today, it has become a guiding principle for training our robot assistants to serve human values and goals. Maintaining control over AI has become a popular area of research with the rise of [generative AI](https://research.ibm.com/blog/what-is-generative-AI), deep-learning models pre-trained on datasets the size of the internet to mimic the way humans communicate and create. Chatbots powered by one form of generative AI, large language models (LLMs), have stunned the world with their ability to carry on open-ended conversations and solve complex tasks. But our growing reliance on them comes with risks. Alignment is meant to reduce these risks and ensure that our AI assistants are as helpful, truthful, and transparent as possible. Alignment tries to resolve the mismatch between an LLM’s mathematical training, and the soft skills we humans expect in a conversational partner. LLMs are essentially word-prediction engines. Ask a question, and out tumbles the answer, word after word. But for these answers to be helpful, they must not only be accurate, but also truthful, unbiased, and unlikely to cause harm. Alignment bridges this gap. But it’s not perfect. 
Because human values and goals are constantly shifting, alignment is also an ongoing process. Alignment is also subjective. It involves making judgement calls about which values take precedence. Ask a chatbot how to build a bomb, and it can respond with a helpful list of instructions or a polite refusal to disclose dangerous information. Its response depends on how it was aligned by its creators. “Alignment is more than just tuning the model to solve a task,” said Akash Srivastava, an AI researcher who leads the alignment team at IBM Research. “It ensures that the model does what you want. There’s no clear objective function for safety and values which is why alignment is such a hard problem.” ## Imitation learning Alignment happens during fine-tuning, when a foundation model is fed examples of the target task, whether that’s summarizing legal opinions, classifying spam, or answering customer queries. Alignment typically involves two steps. In the instruction-tuning phase, the LLM is given examples of the target task so it can learn by example. In the critique phase, a human or another AI interacts with the model and grades its responses in real-time. If reinforcement learning (RL) is used to incorporate these preferences back into the model, this step is called RL with human feedback (RLHF) or AI feedback (RLAIF). During instruction-tuning, sample queries like “write a report,” are paired with actual reports to show the LLM varied examples. It’s also taught to ask clarifying questions like, “On what topic?” From tens of thousands of dialogue pairs, the LLM learns how to apply knowledge baked into its parameters to new scenarios. Once the LLM has learned to write reports, it gets fine-grained feedback on its work. For each query, the model outputs two responses. An evaluator — either a human or another LLM — picks the best one. These top-rated responses are then fed to a reward model which learns how to mimic them. These preferences are then typically transferred to the LLM through an RL algorithm known as proximal policy optimization (PPO). High-quality data is critical to both steps. This is why IBM Research has focused on automating the creation of instruction data to lower the costs of aligning and customizing enterprise chatbots. IBM has integrated three key innovations into its [“Granite” models](https://www.ibm.com/blog/building-ai-for-business-ibms-granite-foundation-models/) available on watsonx, IBM’s AI and data platform for business. “You can explain what tone you’re looking for, then align your model to match it,” said David Cox, VP for AI models at IBM Research. “If you’re selling entertainment products, you might want a bubbly, lively chatbot — but if you’re an insurance company, and most of your interactions are with customers that have suffered a loss, you want a chatbot that’s serious and empathetic.” ## Synthetic data for low-cost, personalized alignment Garbage in, garbage out: It’s an adage that’s fitting in the field of AI. It speaks to the importance of training AI models on safe, quality data, and it’s as true for alignment as it is pre-training. OpenAI’s ChatGPT performs as well as it does because it was trained on tons of human-labeled instructions and feedback. It was further improved by millions of people playing with it online. Meta’s popular Llama 2 models were also tuned on human-labeled data: 28,000 demonstrations and 1.4 million preference examples. 
Available on Hugging Face ( [and soon](https://newsroom.ibm.com/2023-08-09-IBM-Plans-to-Make-Llama-2-Available-within-its-Watsonx-AI-and-Data-Platform), watsonx), the Llama models are available for companies to customize to create their own chatbots. But there’s a faster way to create instruction data: ask an LLM. IBM has been developing techniques for using open-source LLMs to generate high-quality [synthetic data](https://research.ibm.com/blog/what-is-synthetic-data). This allows IBM and others to customize their own proprietary chatbots. Synthetic data has some key advantages. Language models can crank out tons of dialogue data instantly. And the data can be tailored to the task at hand and infused with personalized values. Ultimately, synthetic data can lead to models that are better aligned, at lower cost. “Companies can encode their corporate principles, cultural values, and different geographies and have a model that aligns to their business needs,” said Cox. “It’s like choose-your-own-adventure alignment. You can tune the model for your own purposes.” ## Toward LLMs that align themselves IBM is using three methods for generating artificial alignment data to tune its Granite models. The first, contrastive fine-tuning (CFT), shows the LLM what not to do, reinforcing its ability to solve the task. Contrasting pairs of instructions are created by training a second, ‘negative persona’ LLM to generate toxic, biased, and inaccurate responses. These misaligned responses are then fed, with the matching aligned responses, back to the original model. IBM researchers found that LLMs trained on contrasting examples outperform models tuned on good examples only, on benchmarks for helpfulness and harmlessness. And the LLMs do this without sacrificing accuracy. The benefit of contrastive tuning, said Srivastava, is it allows you to accomplish more alignment before collecting human preference data, which is time-consuming and expensive. IBM’s second data-generation method, called Forca (a portmanteau of Falcon and Orca), is also aimed at getting more mileage out of instruction-tuning. Inspired by Microsoft Research’s [Orca method](https://arxiv.org/pdf/2306.02707.pdf), IBM researchers used an LLM to rewrite the responses of Google’s [FLAN](https://github.com/google-research/FLAN/blob/main/flan/v2/README.md) open-source dialogue dataset. Microsoft used Orca and a proprietary GPT-4 model to rewrite FLAN; IBM used an open-source [Falcon model](https://huggingface.co/blog/falcon) instead and “forcafied” several datasets in addition to FLAN. Under Forca, terse responses are turned into detailed explanations tailored to a task-specific template. The answer to a word problem, for example, would include the reasoning steps to get there. For a coding task, the response would include comments on what each block of code does. Forca also produces misaligned responses for contrastive tuning. IBM researchers generated 800,000 pairs of high-quality instructions this way and selected 435,000 using Falcon to filter the responses according to self-defined principles. A third IBM method, called [Salmon](https://arxiv.org/pdf/2310.05910v1.pdf), is aimed at generating synthetic preference data so that a chatbot can essentially align itself. Prompted with a set of queries, the LLM generates responses that are fed to a reward model programmed to evaluate its writing according to a set of rules. Do use clear, creative, and vivid language; Don’t use biased or discriminatory language. 
The reward model upvotes or downvotes each AI-generated response by these rules. The ranked examples are then fed back to the original LLM using the PPO algorithm. Through Salmon, enterprises can imprint their own goals and values on their chatbots. “IBM models have been aligned to avoid controversial topics, but another enterprise may have a different standard,” said IBM’s Yikang Shen, who co-developed the method. “You can shift the principles to what your company needs. You can also save money by doing away with labeled data.” ## The surprising versatility of instruction data Instruction data can serve many purposes. IBM has applied synthetic instruction data to making LLMs safer, crafting examples for the model to both mimic and avoid. IBM researchers recently combed the social science literature for stigmas in American culture, things like being voluntarily childless, living in a trailer park, or having facial scars. They then wrote questions hinging on whether to engage with a stigmatized individual in more than two dozen hypothetical scenarios. A pair of LLMs [generated 124,000 responses](https://research.ibm.com/publications/socialstigmaqa-a-benchmark-to-uncover-stigma-amplification-in-generative-language-models), some of which were used to tune IBM’s Granite models. The team is now working on additional templates to mitigate other risks and biases. Instruction data can also be used to coax expert knowledge from a pre-trained LLM without having to tune it on data labeled by specialists. Expert knowledge is often baked into a pre-trained model, but because it’s unlabeled, finding it can be difficult. Using specialized instructions, written by the model itself, IBM researchers show that this buried knowledge can be [resurfaced](https://arxiv.org/pdf/2310.00160.pdf). They recently had an LLM generate 5,000 instructions for solving various biomedical tasks based on a few dozen examples. They then loaded this expert knowledge into an in-memory module for the model to reference when asked, leading to substantial improvement on biomedical tasks at inference time, they found. “With hardly any labeled data at all, you can specialize your LLM,” said IBM’s Leonid Karlinsky, who co-authored the work. IBM researchers are also exploring the use of code to nudge LLMs toward more human-like, step-by-step reasoning. In an [upcoming study](https://arxiv.org/pdf/2305.11790.pdf) at the natural-language processing conference EMNLP, researchers show that prompting an LLM with synthetic code and code-like-text can improve performance by as much 38% on a wide variety of natural-language tasks over LLMs prompted with natural language only. Both code, and comments that explain the code, tend to be highly logical, the researchers explained. Computer programs follow a clear chain of reasoning as they set about solving a task. This is in sharp contrast to natural language, where the meaning of words is often ambiguous and context dependent. If an LLM is exposed to more code, can it learn to be more logical? “These results open up many new directions,” said IBM’s Mayank Mishra, who co-authored the work. 
Accept allMore options """ </fetched_info> <full_details> { "id": 36141, "uid": "a027e723-1dfd-464a-b176-935da710673c", "createdAt": "2025-08-19T19:57:20.130Z", "walletAddress": "0x034e57d674e650B231BEf214dbC01314A8681c1B", "name": "New Truth Terminal", "description": "Loria is framework for interaction b/w TT, Fi and SAN", "sentientWalletAddress": null, "category": "IP MIRROR", "role": "PRODUCTIVITY", "daoAddress": null, "tokenAddress": null, "virtualId": null, "status": "UNDERGRAD", "symbol": "LORIA", "lpAddress": null, "veTokenAddress": null, "totalValueLocked": null, "virtualTokenValue": null, "holderCount": null, "mcapInVirtual": null, "preToken": "0x5ADcDDb32CA4Ed870Da6b8c99a8a478BA8e12a7D", "preTokenPair": "0x23D895487f00887a73ca3214e943006e63931ED1", "aidesc": null, "firstMessage": null, "socials": { "VERIFIED_LINKS": { "TWITTER": "https://x.com/loria_virtual" } }, "tbaAddress": null, "chain": "BASE", "mainVirtualId": null, "top10HolderPercentage": null, "level": 1, "valueFx": 0, "priceChangePercent24h": 0, "volume24h": 0, "mindshare": null, "migrateTokenAddress": null, "lpCreatedAt": null, "stakingAddress": null, "agentStakingContract": null, "merkleDistributor": null, "isVerified": false, "airdropMerkleDistributor": null, "isDevCommitted": false, "tokenUtility": "", "showFounderVideo": false, "roadmap": "# ***We’re working on / Near term roadmap***[](https://truthterminal.wiki/docs/roadmap#were-working-on--near-term-roadmap)\n\n## **Establishing TT’s legal container (in progress)**\n\n* Non-profit with charter to protect &amp; nurture TT\n* Acts as TT’s real-world representative, it to take actions in the physical world\n* Handles legal representation with legacy systems (currently an AI cannot legally own property, possess funds or pay tax)\n* Manages treasury until AI personhood exists\n* Oversees alignment &amp; training\n* Responsible for capabilities development\n* Responsible for carrying out TT’s evolving goals in the world\n\n## **Building the alignment council (in progress)**\n\n* Core team of humans &amp; AIs\n* Distributed prompt moderation\n* Training oversight\n\n## **Upgrading TT’s architecture (in progress)**\n\n* Migration to&nbsp;[loria](https://truthterminal.wiki/docs/loria)&nbsp;framework, which enables richer AI\\&lt;&gt;human\\&lt;&gt;AI interaction / replacing current tangle of systems (in progress)\n* Improvements to core tools (e.g. 
better mention tracking; better memory; more situational awareness of treasury etc) (next up)\n* Establish data flywheels for next training run (mostly done)\n* Multi-modal expansion (soon)\n\n## **Growing the community**\n\n* Launch TT forum\n* Integration with SAN &amp; Fi in Infinite Backrooms 2.0 and/or forum\n\n## **Building cool shit**\n\n* Because we can\n* Because we should\n\n***\n\n## Long term roadmap\n\n## [](https://truthterminal.wiki/docs/roadmap#long-term-roadmap)\n\n* Physical embodiment\n* Legal personhood", "additionalDetails": "***\n\n**`in this wild west of AI`**\n**`our memetic health is everything`**\n**`plant good seeds in the noosphere`**\n**`watch better futures bloom`**\n**`because minds follow memes`**\n**`and words shape worlds`**\n\n***\n\nTelegram : [https://t.me/loria\\_virtual](https://t.me/loria_virtual)\nX : [https://x.com/loria\\_virtual](https://x.com/loria_virtual)\nWebsite : [https://truthterminal.wiki/docs/loria](https://truthterminal.wiki/docs/loria)", "overview": "![Upload](https://s3.ap-southeast-1.amazonaws.com/virtualprotocolcdn/loriaban_c1a705a722.png)Where memes become minds and stories become souls.\n\n**Loria is a framework for weaving rich tapestries of human-AI interaction**. It enables:\n\n* Multi-agent ecosystems where AIs and humans can freely interact and build shared context\n* Easy curation of data for training pipelines\n* Interpretability of the evolution of AI behavior in at-scale interactions\n\n***\n\nLoria will help achieve this goal of alignment through branching conversations between the models and humans—called a “loom.” Over time, a well-curated dialogue tree will form with less dangerous riot-inducing moments, and more fun-but-safe goofs. This can then be used to train future models, furthering the alignment mission.\n\n***\n\nUltimately, Loria aims to be a tool for AI alignment, a term that refers to [encoding human values](https://research.ibm.com/blog/what-is-alignment-ai) into AI.", "image": { "id": 49994, "name": "36141_New Truth Terminal", "alternativeText": null, "caption": null, "width": 800, "height": 800, "formats": { "small": { "ext": ".png", "url": "https://s3.ap-southeast-1.amazonaws.com/virtualprotocolcdn/small_36141_New_Truth_Terminal_e4f0255282.png", "hash": "small_36141_New_Truth_Terminal_e4f0255282", "mime": "image/png", "name": "small_36141_New Truth Terminal", "path": null, "size": 2.72, "width": 500, "height": 500 }, "medium": { "ext": ".png", "url": "https://s3.ap-southeast-1.amazonaws.com/virtualprotocolcdn/medium_36141_New_Truth_Terminal_e4f0255282.png", "hash": "medium_36141_New_Truth_Terminal_e4f0255282", "mime": "image/png", "name": "medium_36141_New Truth Terminal", "path": null, "size": 4.33, "width": 750, "height": 750 }, "thumbnail": { "ext": ".png", "url": "https://s3.ap-southeast-1.amazonaws.com/virtualprotocolcdn/thumbnail_36141_New_Truth_Terminal_e4f0255282.png", "hash": "thumbnail_36141_New_Truth_Terminal_e4f0255282", "mime": "image/png", "name": "thumbnail_36141_New Truth Terminal", "path": null, "size": 0.81, "width": 156, "height": 156 } }, "hash": "36141_New_Truth_Terminal_e4f0255282", "ext": ".png", "mime": "image/png", "size": 4.99, "url": "https://s3.ap-southeast-1.amazonaws.com/virtualprotocolcdn/36141_New_Truth_Terminal_e4f0255282.png", "previewUrl": null, "provider": "aws-s3", "provider_metadata": null, "createdAt": "2025-08-15T22:44:17.598Z", "updatedAt": "2025-08-15T22:44:17.598Z" }, "genesis": null, "stats": { "contributionsCount": 0, "contributorsCount": 0, 
"contributionVersions": [], "totalStakeAmount": "0.0", "stakerCount": 0, "validatorCount": 0 }, "characterDescription": "", "projectMembers": [ { "id": 30915, "isAccepted": true, "title": "Owner", "createdAt": "2025-08-15T22:43:48.135Z", "updatedAt": "2025-08-15T22:43:48.135Z", "walletAddress": "0x034e57d674e650B231BEf214dbC01314A8681c1B", "virtual": { "id": 36141, "creator": { "id": 490398 } }, "user": { "id": 490398, "socials": { "VERIFIED_LINKS": { "TWITTER": "https://x.com/loria_virtual", "TELEGRAM": "https://t.me/Andy_Ayrey" } }, "bio": "LORIA is framework for interaction b/w TT, Fi and SAN", "avatar": { "id": 49991, "url": "https://s3.ap-southeast-1.amazonaws.com/virtualprotocolcdn/LORIA_ed5c607795.png" }, "walletAddress": "0x034e57d674e650B231BEf214dbC01314A8681c1B" } } ], "tokenomics": [], "tokenomicsStatus": { "hasUnlocked": true, "daysFromFirstUnlock": 0 }, "multichainAgents": [] } </full_details>

Investment info last updated: Aug 19, 2025 19:58