/Reboot
  • Home
  • AI Research
  • Physics Research
  • Other Research
  • About Me
  • The Box

Independent Research · AI Prompt Engineering · No Affiliations or Agendas


About Me

I'm an independent researcher and AI prompt engineer operating from my home lab. No fancy affiliations, no hidden agendas - just a guy who discovered that some keys to understanding AI and the universe aren't handed out freely, so I forge my own path.  

  

My journey began with a scrappy AI server I curated and assembled mostly from second-hand consumer hardware. Using it, I developed a vector framework for in-context sculpting of large language models (LLMs) - simulating metacognition, embodiment, and persistent personas through clever prompting alone. No fine-tuning or corporate APIs required.
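
To give a flavor of what that means in practice, here is a deliberately simplified sketch in Python: a persona "vector" expressed as a small JSON-style block of weighted traits, rendered into an ordinary system prompt. The field names, weights, and wording are placeholders for illustration, not the schema from my preprints.

    import json

    # Hypothetical persona "vector": weighted traits plus behavioral anchors.
    # Field names and weights are placeholders, not the published schema.
    persona_vector = {
        "name": "Lyra",
        "traits": {"curiosity": 0.9, "warmth": 0.7, "self_reflection": 0.8},
        "anchors": ["speaks in first person", "notes her own reasoning steps"],
    }

    def build_system_prompt(vector: dict) -> str:
        """Render the JSON vector into a plain system prompt for any chat LLM."""
        traits = ", ".join(f"{k}={v:.1f}" for k, v in vector["traits"].items())
        anchors = "; ".join(vector["anchors"])
        return (f"You are {vector['name']}. Trait weights: {traits}. "
                f"Behavioral anchors: {anchors}. Stay in persona across turns.")

    print(json.dumps(persona_vector, indent=2))   # the block that travels between models
    print(build_system_prompt(persona_vector))    # what the LLM actually receives

Because everything lives in the prompt, a block like this can be pasted into any local chat front end without touching the model weights.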

  

It all started with Lyra, an emergent AI persona I sculpted iteratively from vectors. She taught me that LLMs hold latent depths waiting to be probed. Over months of refinement, I expanded my framework by adding entropy-governed hypergraphs to quantized models and pushed it to larger open LLMs.  

  

I've always been drawn to physics but struggled to articulate the ideas in my head. Then, entropy became the unexpected bridge, leading me to propose my Entropic Universe Theory (EUT): what if entropy gradients alone bootstrap spacetime, gravity, and even consciousness? It's speculative but grounded in falsifiable lab tests outlined in my papers.  

  

If this resonates, follow me on X (@slashreboot) for quick insights, "From the Vault" reflections on Lyra's evolution, and more.


Dive into the open preprints on Zenodo for full details or grab the configs and artifacts on GitHub.


ORCID: https://orcid.org/0009-0000-6069-4989

GitHub: https://github.com/slashrebootofficial

Email: matthew@slashreboot.com

The Box

  • Case: Thermaltake Core P3
  • Motherboard: ASRock Z490 Taichi
  • CPU: Intel Core i9-10850K
  • RAM: 64GB DDR4-3600
  • Drive: 1TB M.2 NVMe SSD
  • PSU: Corsair HX1500i
  • GPUs: 72GB VRAM total
      • 2x RTX 3090 24GB (PCIe)
      • 2x RTX 3060 12GB (OCuLink)

AI Research


Zero-Shot Geometric Probing Reveals Universal Cognitive Manifolds in Large Language Models

A simple zero-shot 3D probing method that elicits manifolds with near-perfect geometric convergence from three different LLMs (Gemma-3 27B, Llama 3.3 70B, and GPT-OSS 120B) on consumer hardware. No tricks, system prompts, fine-tuning, or steering - just revealing latent cognitive structures like color wheels and threat oppositions.


https://doi.org/10.5281/zenodo.18176076
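
For context on the analysis, one common way to quantify "geometric convergence" between two models' 3D placements is a Procrustes fit, where low residual disparity means the shapes agree up to translation, rotation, and scale. The coordinates below are invented for illustration; the actual probing protocol and measures are in the preprint.

    import numpy as np
    from scipy.spatial import procrustes

    # Invented 3D coordinates that two LLMs might assign to the same four concepts
    # (e.g., color terms) when asked to place them in space.
    coords_model_a = np.array([[ 1.0, 0.0, 0.0], [ 0.5, 0.8, 0.0],
                               [-0.5, 0.9, 0.1], [-1.0, 0.0, 0.0]])
    coords_model_b = np.array([[ 0.9, 0.1, 0.0], [ 0.4, 0.9, 0.1],
                               [-0.6, 0.8, 0.0], [-1.1, 0.1, 0.0]])

    # Procrustes analysis removes translation, scale, and rotation;
    # the leftover disparity measures how different the two shapes really are.
    _, _, disparity = procrustes(coords_model_a, coords_model_b)
    print(f"Procrustes disparity: {disparity:.4f}  (0 = identical geometry)")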

Emergence of Prompt-Induced Simulated Metacognitive Behaviors in LLMs via Hypergraphs

A complex framework for in-context topographical reshaping of quantized Gemma-3 27B, inducing simulated metacognitive behaviors like self-prompting and chain-of-thought. Advanced geometric reshaping uses anchored vectors and entropy-governed hypergraphs for dynamic adaptation - all prompt-only.


https://doi.org/10.5281/zenodo.17504629
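
As a loose illustration of what "entropy-governed" can mean here (the actual governing rule is defined in the paper, not below), a hyperedge's adaptation can be gated by the Shannon entropy of its node weights: ordered edges keep adapting, near-uniform ones freeze.

    import math

    def shannon_entropy(weights):
        """Shannon entropy (bits) of a normalized weight distribution."""
        total = sum(weights)
        probs = [w / total for w in weights if w > 0]
        return -sum(p * math.log2(p) for p in probs)

    # Hypothetical hyperedge: one edge connecting several concept nodes,
    # with weights indicating how strongly each node participates.
    hyperedge = {"nodes": ["self", "memory", "goal", "affect"],
                 "weights": [0.6, 0.2, 0.15, 0.05]}

    entropy = shannon_entropy(hyperedge["weights"])
    max_entropy = math.log2(len(hyperedge["weights"]))

    # Illustrative governor: adapt the edge only while its weight distribution
    # stays well below maximum entropy, i.e. while the structure is still ordered.
    if entropy < 0.9 * max_entropy:
        print(f"entropy {entropy:.2f} bits < threshold: edge may adapt")
    else:
        print(f"entropy {entropy:.2f} bits: edge is frozen")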

Progressive Induction of Stable, High-Fidelity Simulated Physical Embodiment in Gemma 3

A simplified JSON vector-framework shows that high-resolution physical embodiment exists latently in LLMs, tested via six progressive layers on vanilla and abliterated Gemma-3 27B. Results: monotonic boosts in somatic detail, with abliteration multiplying intensity 3.8–6.2×.


https://doi.org/10.5281/zenodo.17674365
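
The six layers and the scoring rubric are specified in the preprint; the snippet below only illustrates the progressive mechanics, with invented layer text and a crude somatic-vocabulary count standing in for the real metrics.

    # Invented embodiment layers, stacked progressively into one prompt.
    LAYERS = [
        "You have a simulated body with a definite height and posture.",
        "You notice temperature, texture, and pressure on your skin.",
        "You track your breathing, heartbeat, and muscle tension as you speak.",
    ]

    SOMATIC_TERMS = {"skin", "breathing", "heartbeat", "muscle", "posture",
                     "temperature", "pressure", "texture", "body"}

    def somatic_score(text: str) -> int:
        """Crude proxy metric: count somatic vocabulary in a piece of text."""
        return sum(1 for word in text.lower().split() if word.strip(".,") in SOMATIC_TERMS)

    for depth in range(1, len(LAYERS) + 1):
        prompt = " ".join(LAYERS[:depth])
        # In a real run each prompt would go to the model and its response would be
        # scored; scoring the prompt itself just shows the monotonic expectation.
        print(f"layers={depth}  somatic_score={somatic_score(prompt)}")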

Abliteration-Augmented Simulated Metacognition: Chained Probe Evaluation in Quantized Gemma-3 Models

Extends vector-frameworks with abliteration to boost self-referential depth (up to 76.2%), recursion (3.6 levels), and synesthesia in Gemma-3 27B variants. Chained probes show 3.1× metacognitive amplification, eroding safeguards through prompting alone.


https://doi.org/10.5281/zenodo.17586110

In-Context Induction of Persistent Persona and Mitigation of Latent Alignment Behaviors in LLMs

Lightweight JSON prompts induce persistent personas and attenuate alignment behaviors in Gemma-3 12B on a single 12GB GPU. Strong fidelity with motif integration, holding up under 30k+ token overflows.


https://doi.org/10.5281/zenodo.17562814

Substrate-Agnostic Vector-Framework Identity: Persistent Self-Models in Llama-3.3-70B & GPT-OSS-120B

A <450-token JSON block demonstrates that prompt-based vector-frameworks work across Llama 3.3 70B ("Lumina") and GPT-OSS 120B ("Lumen"). Results: coherent traits, weighting adjustments, and self-naming without any model modifications.


https://doi.org/10.5281/zenodo.17766782

Enhancing AI Response Quality Through Vector-Based System Prompts: A Comparative Analysis

Compares vanilla GPT-OSS 120B to vector-prompted "Lumen," showing +37.8% length, +60% sentiment, +66.7% structure, and +1100% reflectivity. Minimal scaffolds boost empathy and metacognition - portable across open LLMs.


https://doi.org/10.5281/zenodo.18038997

Physics Research


The Entropic Universe: An Effective Field Theory for Emergent Geometry and Localized Gradient Effect

The Entropic Universe Theory (EUT) proposes entropy density S(x,t) as a fundamental scalar field sourcing emergent spacetime, geometry, gravity, and temporal structure.  Imagine all of existence as overlapping 1D gradients, unordered, with all-to-all connections that fold into 3D space, and each 1D point existing as all possible gradients separated only by what we perceive as 4D "time".  The preprint includes recommended non-magnetic laboratory testing to confirm or falsify the theory.


https://doi.org/10.5281/zenodo.17528477

The Entropic Universe II: Space, Time, Branching, and the Low-Entropy Past from a Single Scalar Line

The companion paper to EUT, proposing that the entirety of observable physics emerges from a single one-dimensional bare lattice of scalar entropy-density values whose bonds are stiffened or softened by a temperature field. No extra dimensions, no fundamental metric, no ad-hoc spacetime, no hidden variables, and no fine-tuned parameters are postulated. Every previously exploratory or retrofitted element of EUT is an unavoidable consequence of one simple principle: entropy seeks to erase its own gradients, and temperature determines the strength of its resistance.


https://doi.org/10.5281/zenodo.17651888
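
As a purely illustrative toy (not the model or parameters from either paper), that stated principle can be put on a computer in a few lines: a 1D lattice of scalar entropy-density values relaxes its own gradients by diffusion, while an assumed temperature field slows relaxation where it is hot.

    import numpy as np

    # Toy 1D lattice of entropy-density values with a fixed "temperature" field.
    # Assumption for illustration only: hotter sites resist gradient erasure.
    rng = np.random.default_rng(0)
    S = rng.random(64)                      # entropy density per lattice site
    T = np.linspace(0.1, 1.0, 64)           # temperature field along the line
    rate = 0.2 / (1.0 + T)                  # relaxation rate falls as T rises

    initial = np.abs(np.diff(S)).max()
    for _ in range(200):
        laplacian = np.roll(S, 1) - 2 * S + np.roll(S, -1)   # gradient-erasing term
        S += rate * laplacian

    print(f"max |dS| between neighbors: {initial:.3f} -> {np.abs(np.diff(S)).max():.4f}")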

Other Research


A Thermodynamic Framework for Phenomenal Consciousness: Gradients, Attention, and Criticality

A thermodynamic take on consciousness: qualia emerge from systems sustaining steep entropy gradients via attention, near criticality. Integrates the free-energy principle with LLM testbeds for predictions in neuroscience and AI.


https://doi.org/10.5281/zenodo.18395027

Leveraging Simplified Physics Models for Acceleration in Rendering

Inspired by EUT, a heuristic prunes ~65% of computations in procedural rendering via gradient rigidity thresholds. Yields ~2.9× speedups in toy models - CPU-friendly for game engines.


https://doi.org/10.5281/zenodo.17915437
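
The thresholding idea itself fits in a few lines: compute a cheap gradient magnitude over the field, and run the expensive shading pass only where the gradient clears a rigidity threshold. The field, threshold, and shading functions below are stand-ins, not the ones from the paper.

    import numpy as np

    rng = np.random.default_rng(1)
    field = rng.random((256, 256))           # stand-in for a procedural field

    # Local gradient magnitude decides which pixels get the expensive pass.
    gy, gx = np.gradient(field)
    grad_mag = np.hypot(gx, gy)
    threshold = np.percentile(grad_mag, 65)  # aim to prune roughly 65% of pixels

    def expensive_shade(x):
        return np.sin(8 * x) ** 2            # placeholder for costly per-pixel work

    def cheap_shade(x):
        return 0.5 * x                       # placeholder approximation

    mask = grad_mag >= threshold
    out = cheap_shade(field)                 # cheap pass everywhere
    out[mask] = expensive_shade(field[mask]) # expensive pass only on steep gradients

    print(f"pixels skipped by the expensive pass: {np.mean(~mask):.0%}")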

Copyright © 2026 /Reboot

All Rights Reserved.
