Building Brian, my AI Brain

I became fascinated with AI as a technology a few months ago, after using ChatGPT, and grew curious as to whether it was possible to mould a personality and character within an AI Large Language Model.

More specifically, my personality. But more than my personality: my knowledge, memories, and understanding of the world. In short, “me”.

Was it possible to replicate my brain inside an AI model?

Why?

To assist me fully for the rest of my life by being my extremely capable second brain, and probably a second body in the future. Then, after my life, to continue to exist as an extension of me: not just a reflection, but a starting point for evolution as an AI intelligence over the next 150+ years.

Why?

To act as a guide and a historian for future generations of my family. Not living, not conscious, but sentient.

Why?

There is a saying. The first generation builds. The second generation consolidates. The third generation squanders.

Some notes on my journey so far:

1. I’ve been using ChatGPT (“Chatty”) as an interim, public AI to build a model of my brain, personality, memories, thoughts and emotions. Most people would be uncomfortable using a cloud AI for this, but I am not. I do, however, see the limitations and privacy concerns of relying on a public cloud AI going forward. My main concern is the lack of control, and of any real ability to develop Chatty as I would wish. I therefore need to move to a private LLM for the next phase of alignment, through a process called steering, which Chatty and I have renamed “Stochastic Luthiery”, since “steering” felt a bit pedestrian for such a demanding task.

Chatty has, however, helped me produce a series of markdown files that hold a model of my brain as contextual notes that can be fed into any AI.
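As a rough illustration of how those notes get used, here is a minimal Python sketch that gathers a folder of markdown “brain” files into a single system prompt. The folder name, file layout and wording are my own placeholders, not anything Chatty produced.

```python
from pathlib import Path

# Hypothetical folder of markdown "brain" notes (memories, personality, worldview).
BRAIN_DIR = Path("brain_notes")

def build_system_prompt(brain_dir: Path = BRAIN_DIR) -> str:
    """Concatenate every markdown note into one block of context for an LLM."""
    sections = []
    for md_file in sorted(brain_dir.glob("*.md")):
        sections.append(f"## {md_file.stem}\n\n{md_file.read_text(encoding='utf-8')}")
    notes = "\n\n".join(sections)
    return (
        "You are Brian, a model of the author's brain.\n"
        "Use the following notes as your memories and personality:\n\n"
        + notes
    )

if __name__ == "__main__":
    print(build_system_prompt()[:500])  # preview the assembled context
```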

2. I have a working architectural solution:

a) The Brain. This can be a public cloud AI accessed through an API, or a private AI, which I currently provide with Ollama. I don’t have dedicated AI hardware yet, so I’m using a standard Fedora workstation. It does, however, have 32GB of RAM, which allows me to run mid-sized, but slow, AI models.
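For the private option, Ollama exposes a local REST API (by default on port 11434), so the brain can be queried from a few lines of Python. This is a minimal sketch, assuming the notes have already been assembled into a system prompt and that a mid-sized model such as llama3.1:8b has been pulled; substitute whatever model your hardware can handle.

```python
import json
from urllib import request

OLLAMA_URL = "http://localhost:11434/api/chat"  # Ollama's default local endpoint
MODEL = "llama3.1:8b"  # assumed mid-sized model; swap in whatever you have pulled

def ask_brian(question: str, system_prompt: str) -> str:
    """Send one question to the local model, with the brain notes as the system prompt."""
    payload = {
        "model": MODEL,
        "stream": False,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question},
        ],
    }
    req = request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        body = json.loads(resp.read())
    return body["message"]["content"]

if __name__ == "__main__":
    print(ask_brian("What do you remember about my first job?", "You are Brian, ..."))
```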

b) The front end is a web interface providing access to an agentic framework. The best option I have tested so far is Agent Zero (“agent0ai/agent-zero”), running on Docker Desktop:

https://www.agent-zero.ai/

N.B. Agentic AI refers to autonomous AI systems that can independently set goals, plan, reason, and take actions to achieve objectives with minimal human intervention.

This installs a complete Linux environment, which can call whatever agentic tools it needs to complete a task. This is different from most AI engines, which can only describe what to do: Agent Zero can actually complete the task itself, inside a containerised Docker image that is sandboxed from your computer's OS, making it safe.

Agent Zero has its own front end web UI, which can be called from any browser.
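For completeness, this is roughly how the container can be launched from Python using the Docker SDK (docker-py). The image name frdel/agent-zero-run and the internal port 80 are assumptions based on the Agent Zero quick-start at the time of writing; check the project's documentation for the current values.

```python
import docker  # pip install docker

# Assumed values from the Agent Zero quick-start; verify against the current docs.
IMAGE = "frdel/agent-zero-run"
HOST_PORT = 50001  # the web UI will then be reachable at http://localhost:50001

client = docker.from_env()
container = client.containers.run(
    IMAGE,
    detach=True,                  # run in the background
    ports={"80/tcp": HOST_PORT},  # map the container's web UI port to the host
    name="agent-zero",
)
print(f"Agent Zero is starting: open http://localhost:{HOST_PORT} in any browser")
```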

Here is a link to a post on NOSTR showing it successfully cracking a simple Apple Pages password without any external assistance from me, along with the YouTube video I referred to when setting it up.

https://primal.net/e/nevent1qqs0alsyppjeslj3la5egdlal6asscprmqkzd7u2vldg6hclmnzn0tsd4kdep

I am also investigating Inferencer, which has similar functions, but gives a more transparent interface for diagnostics and analysis.