Frequently Asked Questions
Common questions about Intrafere, MOTO, our research, and our business.
About Intrafere
What is Intrafere?
Intrafere is a physics research and technology company based in Milwaukee, Wisconsin. The legal entity is Intrafere LLC, and we operate publicly under the brand name Intrafere Research Group (a DBA of Intrafere LLC). We study fundamental physics — particularly Markovian and non-Markovian mathematical analysis, information theory applied to physical systems, decoherence, and hierarchical systems with scale differentials — and translate those insights into practical technologies.
What does Intrafere actually do?
We have two connected arms. Our public arm releases open-source technologies like MOTO, publishes engineering-focused white papers, conference papers, and preprints, and supports a growing research community. Our B2B arm delivers industrial synthetic training data generation, custom orchestrator development, and physics-led technology solutions to enterprise customers through licensing and direct IP transfer.
Is Intrafere a nonprofit?
Not yet, though stay tuned for charitable initiatives as this rapidly growing startup continues to expand. Intrafere LLC is a for-profit limited liability company. While we release significant work as open source and publish research publicly, we are not a nonprofit or charitable organization. Donations to Intrafere support our open-source work but are not tax-deductible as charitable contributions.
What is ComputEyes?
ComputEyes (ComputEyes.com / @ComputEyes) is a research-first initiative from Intrafere dedicated to studying how society should responsibly prepare for and handle AGI. As researchers who build and study autonomous superintelligent systems firsthand, we believe the question of how AGI will reshape labor markets, governance, economic structures, and daily life is one of the most consequential and understudied challenges facing modern society. ComputEyes is our commitment to producing responsible, engineering-grounded insights on AGI readiness — including practical policy research, regulatory guidance, and B2B development support — so that progress does not leave people behind.
Commercial parties interested in partnering on these efforts while they are under construction can inquire at Partners@intrafere.com.
Can academic institutions or non-profits get access to custom superintelligence harnesses?
Academic institutions, non-profits, and publicly funded research laboratories are encouraged to reach out to Partners@intrafere.com to discuss whether Intrafere is able to assist with custom software design and donation of academic and lab-grade superintelligence harnesses tailored to their appropriate field of study. Availability is limited and subject to our capacity, but we are genuinely committed to supporting research that advances public knowledge.
For commercial organizations seeking the same custom orchestrator capabilities as a paid service, visit our Custom Orchestrator Solutions page.
Where is Intrafere located?
Intrafere LLC is based in Milwaukee, Wisconsin, USA. We are proud to be an American company contributing novel open-source technologies and physics research.
About MOTO
How did Intrafere discover ASI?
MOTO was originally built in late October 2025 with only the brainstorming mechanism, the Aggregator, and was intended as a tool for exploring the full range of variations a model could produce in response to a single prompt. It was an experiment in exhaustive idea extraction, designed as an internal tool for studying our own complex physics research.
After reviewing the quality and depth of what the brainstorming mechanism was producing on its own, we realized we were looking at something far more significant than a looping-prompt tool. The outputs were not just diverse — they were compounding into novel insights that no individual model could produce alone. That recognition led directly to building the compiler as fast as possible and releasing MOTO publicly. The brainstorming mechanism — the Aggregator with its validated, feedback-driven solution basin exploration — is the true engine behind MOTO’s superintelligent output. The Compiler, while it represents the majority of the codebase, exists to organize and present the extraordinary material the Aggregator produces. MOTO ASI was developed by a single programmer; following its successful release, Intrafere is now scaling and growing as a company.
What is MOTO?
MOTO is an ASI Autonomous Deep Research Harness created by Intrafere Research Group. Thanks to Intrafere’s novel discovery of top-p exploration, it can produce longform answers, single academic papers, and academic volume collections rivaling a book in length, all from a single user prompt. The current release is MOTO Autonomous S.T.E.M. ASI, focused on S.T.E.M. domains where mathematical and scientific verification produces the lowest hallucination rates.
How does MOTO work?
MOTO has a two-part core architecture — the Aggregator and the Compiler — both of which achieve ASI results through novel top-p exploration (solution basin aggregation).
The Aggregator is a configurable cluster of 1–10 parallel AI submitters that funnel into a single bottleneck validator — a completely separate model instance. Submitters generate solution attempts in parallel across the solution space. The validator evaluates each one: does this entry make the knowledge base more capable of addressing the user’s goal? Accepted submissions are added to the shared brainstorm database. Rejected submissions are returned with specific feedback, guiding future attempts — every rejection is a learning signal. Iterative pruning periodically removes entries subsumed by newer, more complete ideas, keeping the database self-refining and information-dense rather than bloated.
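The Aggregator's accept/reject cycle can be sketched in a few lines of Python. This is a minimal illustration, not MOTO's actual code: `generate_attempt()` and `validate()` are hypothetical stand-ins for the real submitter and validator model calls, and the submitters run sequentially here rather than in parallel.

```python
# Hypothetical sketch of the Aggregator's accept/reject loop.
# generate_attempt() and validate() stand in for real model calls.
def generate_attempt(prompt, feedback):
    # Placeholder for a submitter model call; real MOTO queries an LLM here,
    # passing along prior rejection feedback as a learning signal.
    return f"idea for '{prompt}' (seen {len(feedback)} rejections)"

def validate(entry, knowledge_base):
    # Placeholder validator: accept entries not already present.
    # Real MOTO asks a separate model instance whether the entry makes
    # the knowledge base more capable of addressing the user's goal.
    accepted = entry not in knowledge_base
    reason = "" if accepted else "duplicate of an existing entry"
    return accepted, reason

def aggregate(prompt, rounds=3, n_submitters=4):
    knowledge_base, feedback = [], []
    for _ in range(rounds):
        # Submitters generate attempts in parallel in the real system;
        # a sequential loop keeps this sketch simple.
        attempts = [generate_attempt(prompt, feedback) for _ in range(n_submitters)]
        for entry in attempts:
            ok, reason = validate(entry, knowledge_base)
            if ok:
                knowledge_base.append(entry)
            else:
                feedback.append(reason)  # every rejection is a learning signal
    return knowledge_base
```

The key structural idea survives even at this scale: many generators, one bottleneck validator, and rejections fed back into the next round of generation.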
The Compiler takes the aggregated knowledge and compiles it into coherent academic papers in a deliberate order: body sections first, then conclusion, then introduction, then abstract — ensuring each section accurately reflects what actually exists in the paper.
MOTO offers two operating modes built on this same architecture:
- Single Paper Writer Mode — The user manually controls both the primary Aggregator and Compiler prompts, giving direct control over the research focus and compilation instructions.
- Autonomous Mode — The user provides a general high-level prompt, and MOTO autonomously loops the Aggregator and Compiler to select research directions, build knowledge databases, compile papers, and compound knowledge across research cycles. This mode is fully autonomous beyond the initial user prompt, capable of running for hours or days without intervention.
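The autonomous loop and the Compiler's back-to-front section ordering can be sketched together. Everything below is illustrative: `pick_direction()`, `run_aggregator()`, and `compile_paper()` are hypothetical placeholders for MOTO's real stages, not its API.

```python
# Hypothetical sketch of Autonomous Mode: select a direction, aggregate
# knowledge, compile a paper, and compound knowledge into the next cycle.
def pick_direction(goal, prior_knowledge):
    # Real MOTO asks a model to choose the next research direction;
    # this placeholder just numbers the cycles.
    return f"{goal}: direction {len(prior_knowledge) + 1}"

def run_aggregator(direction):
    # Placeholder for the Aggregator stage sketched earlier.
    return [f"finding A of {direction}", f"finding B of {direction}"]

def compile_paper(knowledge):
    # Deliberate ordering: body first, then conclusion, introduction,
    # and abstract, so each section reflects what the paper contains.
    body = "\n".join(knowledge)
    conclusion = "Conclusion drawn from the body."
    introduction = "Introduction summarizing the body."
    abstract = "Abstract written last."
    return "\n".join([abstract, introduction, body, conclusion])

def autonomous_mode(goal, cycles=2):
    papers, prior = [], []
    for _ in range(cycles):
        direction = pick_direction(goal, prior)
        knowledge = run_aggregator(direction)
        papers.append(compile_paper(knowledge))
        prior.extend(knowledge)  # knowledge compounds across cycles
    return papers
```

Note that the compiled paper is assembled in reading order even though its sections are written in reverse: the abstract is generated last but presented first.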
Is MOTO free?
Yes. MOTO is completely free and open source on GitHub. There are no hidden fees, subscriptions, or premium tiers. We are committed to continuing to update and improve MOTO as a competitive autonomous research harness that remains freely available to everyone. You can run MOTO at zero cost if you use free-tier model access or host models locally on your own hardware.
When was MOTO released?
MOTO was launched on GitHub on January 10, 2026 and is available for download now.
What AI models and services does MOTO support?
MOTO supports flexible model configurations through two providers:
- LM Studio — Host models locally on your own hardware (port 1234)
- OpenRouter — Cloud API access with automatic fallback on credit exhaustion
You can use either or both, and you can assign different models to different roles (submitters vs. validators), allowing multi-model exploration where different AI models simultaneously explore different solution approaches. MOTO also includes a Boost Mode to selectively accelerate specific tasks with premium models.
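A role-based configuration of this kind might look as follows. The field names and model identifiers below are illustrative assumptions, not MOTO's actual configuration schema; the two endpoints are the standard LM Studio local server (port 1234) and the OpenRouter API base URL.

```python
# Hypothetical configuration sketch: different models assigned to
# different roles, mixing local (LM Studio) and cloud (OpenRouter).
config = {
    "submitters": [
        {"provider": "lmstudio", "model": "local-7b", "port": 1234},
        {"provider": "openrouter", "model": "vendor/large-model"},
    ],
    "validator": {"provider": "openrouter", "model": "vendor/strong-model"},
    "boost_mode": {"enabled": True, "model": "vendor/premium-model"},
}

def endpoint(role):
    """Resolve a role to its provider's OpenAI-compatible endpoint."""
    if role["provider"] == "lmstudio":
        return f"http://localhost:{role.get('port', 1234)}/v1"
    return "https://openrouter.ai/api/v1"
```

Because both providers expose OpenAI-compatible APIs, a single client can serve every role once the endpoint is resolved.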
What can I use MOTO for?
MOTO excels at tasks where deep, structured thinking and verifiable correctness are valuable:
- Mathematical research and proof exploration
- Literature review and synthesis
- Theoretical analysis and framework development
- Complex problem decomposition
- Knowledge base construction
- Academic writing assistance
The current S.T.E.M. release specifically excels at tasks where correctness can be verified through logical consistency rather than external fact-checking. Expansion into non-STEM domains is actively underway.
Why was S.T.E.M. the first release domain?
S.T.E.M. domains were chosen because mathematical and scientific verification is less externally reliant — AI models can internally verify mathematical correctness through logical consistency checks. In testing, S.T.E.M. domains demonstrated the lowest rate of hallucination. This made it the most responsible and demonstrable starting point for an autonomous research system.
How long does MOTO take to produce results?
This varies depending on task complexity, chosen models, and hardware. MOTO is designed for extended autonomous operation — you might run it for hours, overnight, or even for days on complex projects. You can monitor progress in real-time through the frontend, view intermediate outputs at every stage, and stop whenever you are satisfied with the results.
How do I know when MOTO is done?
MOTO uses self-validation techniques: during completion review, the model assesses whether it has exhausted what its weights can contribute to the problem. As MOTO runs longer on the same problem, the rejection rate typically rises, indicating you are approaching the limits of what your selected models can offer. In Autonomous Research Mode, the system autonomously determines when to move from brainstorming to compilation.
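A rising rejection rate lends itself to a simple stopping heuristic. The sketch below is illustrative only; the window size and threshold are made-up tuning knobs, not values MOTO uses.

```python
# Illustrative stopping heuristic: treat a sustained rise in the
# validator's rejection rate as a sign the selected models are nearing
# exhaustion on this problem.
def should_stop(outcomes, window=20, threshold=0.8):
    """outcomes: list of booleans, True = accepted, False = rejected."""
    if len(outcomes) < window:
        return False  # not enough history to judge
    recent = outcomes[-window:]
    rejection_rate = recent.count(False) / window
    return rejection_rate >= threshold
```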
Technical Questions
What are the system requirements?
MOTO is designed to be flexible. Generally, you need:
- Python 3.8+ and Node.js
- A system capable of running AI models locally (if using LM Studio), or an OpenRouter API key for cloud access
- Sufficient RAM for your chosen models (if running locally)
- Storage space for the aggregation database and ChromaDB
A one-click launcher (launch.bat for Windows, startup.sh for Linux/Mac) handles setup. See the Getting Started Guide for detailed instructions.
Can I run MOTO on limited hardware?
Yes. If you have limited RAM, you can run fewer submitters or use smaller models locally. You can also offload processing to OpenRouter’s cloud API entirely, meaning even modest hardware can run MOTO effectively as long as you have an internet connection and API credits.
Does MOTO require an internet connection?
It depends on your configuration. If you run models entirely through LM Studio (local), MOTO operates offline after initial setup — your data never leaves your system. If you use OpenRouter (cloud), an internet connection is required for API calls. You only need internet initially to download MOTO itself and any AI models you want to host locally.
What is the RAG system in MOTO?
MOTO includes a production-grade RAG (Retrieval-Augmented Generation) architecture with a 4-stage retrieval pipeline: query rewriting, hybrid recall, reranking, and packing. It uses cyclic chunk sizes (256/512/768/1024 characters) for multi-granularity exploration, and prioritizes direct content injection over RAG when content fits the context window — no truncation.
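Two of those ideas can be sketched concretely: cyclic chunk sizes for multi-granularity chunking, and preferring direct injection when content fits the context window. The chunk sizes come from the description above; the context-window figure is an arbitrary example, and the function names are not MOTO's.

```python
# Sketch of cyclic chunking and the direct-injection-vs-RAG decision.
from itertools import cycle

CHUNK_SIZES = [256, 512, 768, 1024]  # characters, cycled per chunk

def cyclic_chunks(text):
    # Successive chunks cycle through the size list, giving the
    # retriever multiple granularities over the same document.
    chunks, pos, sizes = [], 0, cycle(CHUNK_SIZES)
    while pos < len(text):
        size = next(sizes)
        chunks.append(text[pos:pos + size])
        pos += size
    return chunks

def prepare_context(text, context_window_chars=8000):
    # Prefer direct injection when the whole text fits: no truncation.
    if len(text) <= context_window_chars:
        return {"mode": "direct", "content": text}
    return {"mode": "rag", "chunks": cyclic_chunks(text)}
```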
Business & Services
Does Intrafere offer custom orchestrator development?
Yes. For organizations whose needs exceed what the open-source release provides, Intrafere builds custom orchestrators with proprietary architectural modifications, extended multi-day autonomy pipelines, domain-specific submitter tuning, and deeper validation layers. These are built by the team that invented the technology, with proprietary insights that go beyond the public codebase. Contact sales@Intrafere.com for inquiries.
What is the training data business?
Intrafere generates and brokers industrial synthetic ASI training data for B2B customers. We offer both shared-rights (lower cost, we retain resale rights) and exclusive-rights (tighter ownership) pricing structures. We generate data across S.T.E.M. and soft-science domains such as governing dynamics and economics. MOTO is capable of creative output for most tasks; if your use case requires creativity, a MOTO ASI harness can likely be adapted to it.
Hard exclusions: We do not produce entertainment data or image generation datasets. Industrial training data brokerage and generation services are only for B2B customers.
Does Intrafere do consulting or contract work?
Yes. We apply novel physics and mathematical research to enterprise-grade technical problems. Core areas include measures of Markovianity in hierarchical systems, non-Markovian analysis, decoherence, and scale-differential dynamics. We typically accept only a small number of contracts — more often, we invent technologies internally and commercialize them through licenses or direct IP sales. ASI was our first product release and invention.
Licensing & Legal
What license is MOTO under?
MOTO is released under a permissive open-source license. You can use, modify, and distribute it freely. See the license file in the GitHub repository for full terms.
Can I use MOTO commercially?
Yes. The open-source license permits commercial use. You can use MOTO for business purposes, integrate it into workflows, or build services on top of it.
Can I modify MOTO?
Absolutely. The code is open source and publicly auditable. You can inspect it, modify it for your needs, and contribute improvements back to the community.
Privacy & Security
Is my data private when using MOTO?
When running with LM Studio (local models), your prompts, data, and AI outputs never leave your hardware. This is a core design principle. If you use OpenRouter (cloud API), your prompts are sent to external model providers per their respective privacy policies. You choose the balance of privacy, cost, and performance that fits your needs.
Does Intrafere collect any data about my MOTO usage?
No. MOTO has no telemetry or data collection. We do not track what you are working on, what models you use, or any other usage information.
Is MOTO secure?
MOTO is open source, meaning the entire codebase is publicly auditable. Security-conscious users can inspect every line of code. We welcome security audits from the community.
Community & Support
How can I contribute to MOTO?
We welcome contributions through our GitHub repository:
- Report bugs and issues
- Submit feature requests
- Contribute code improvements
- Improve documentation
- Share your use cases and examples
- Help other users in discussions
Visit our Community & Contributors page for detailed guidelines.
Where can I get help with MOTO?
Support resources include:
- Getting Started documentation
- GitHub Issues for bug reports
- GitHub Discussions for questions and ideas
- Use Cases & Examples
- Community-contributed guides and tips
How can I support Intrafere’s open-source work?
Follow us on social media and other platforms, tell your friends and colleagues about MOTO, contribute code or documentation, and engage with the community. If you represent a business interested in professional services, custom orchestrator development, or B2B partnerships, contact sales@Intrafere.com.
Research & Future Development
What research does Intrafere publish?
We publish engineering-focused white papers, conference papers, and preprints. Our research spans AI safety and alignment, the sociological impact of ASI and AGI, physics (particularly non-Markovian dynamics and decoherence), and the architecture behind autonomous research systems. Visit our Research Team page to explore our published work.
What is on the roadmap for MOTO?
We are committed to continuing to improve MOTO with updates that keep it a competitive, state-of-the-art autonomous research harness — freely available on GitHub. Active development directions include expanding beyond S.T.E.M. domains, safety and alignment research, and studying how autonomous superintelligent research systems relate to AGI. Visit our Development Roadmap for details.
Can I request features?
Yes. Submit feature requests through GitHub Issues. We actively consider community feedback in our development priorities.
Is Intrafere hiring?
Intrafere Research Group is a small, tight-knit group of researchers committed to making high-effort, high-impact contributions to S.T.E.M. research and to the corresponding B2B and open-source technology ecosystems. We are selective about growth because quality and alignment matter more to us than headcount. If that resonates with you, visit our Careers page to see if there are open positions.
Still Have Questions?
If your question is not answered here, contact us directly or join the discussion on our GitHub repository. We are happy to help.