Carolyn Meinel's Research Project

BestWorld: Aiding AI Alignment with a Human/AI System that Generates True, Believable, and Beneficial Information

AI alignment can be viewed as combining subsets of the disciplines of computer security, quality control, and forecasting of alternate conditional outcomes, such as those my colleagues and I experimented with under IARPA's FOCUS program. I have experience in these topics and, in many cases, refereed research papers on them. I am viewing AI alignment through the lenses of these disciplines.

Bottom line: Hybrid Human/Many-Federated-LLMs Systems could play a role in AI Alignment
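One way to picture such a federated system: pool probability forecasts from several LLMs and human forecasters into a consensus, then score the result against resolved outcomes. This is a minimal sketch only; the forecaster names, weights, and the weighted-mean aggregation rule are illustrative assumptions, not the actual BestWorld design.

```python
# Hypothetical sketch of a hybrid human/many-LLMs forecasting pipeline:
# aggregate probability forecasts for a binary question with a weighted
# mean, then evaluate the consensus with the Brier score (lower is better).
# All names and weights below are illustrative, not BestWorld's design.

def aggregate(forecasts: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted mean of probability forecasts for one binary question."""
    total_w = sum(weights[name] for name in forecasts)
    return sum(p * weights[name] for name, p in forecasts.items()) / total_w

def brier(prob: float, outcome: int) -> float:
    """Brier score for one binary question: (p - outcome)^2."""
    return (prob - outcome) ** 2

# Two hypothetical LLM forecasters plus a human panel, with the humans
# weighted more heavily (an assumption for illustration).
forecasts = {"llm_a": 0.70, "llm_b": 0.60, "human_panel": 0.80}
weights = {"llm_a": 1.0, "llm_b": 1.0, "human_panel": 2.0}

consensus = aggregate(forecasts, weights)
print(round(consensus, 3))            # 0.725
print(round(brier(consensus, 1), 4))  # 0.0756 if the event occurred
```

In practice a system like this would also need calibration checks per forecaster and a way to learn the weights from past accuracy, but the core loop of aggregate-then-score is this simple.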

In part for this project, I am leading a team of four colleagues in building a human/AI system that:

Our planned knowledge product:

We believe that a hybrid human/AI system such as the one we are preparing to test:

Inspiration for this research includes my participation in:

So far, we have:

Plans B, C, and D that we have been evaluating for proof-of-concept experiments:

We launched a BestWorld newsletter, with two more newsletters in development.

Data sources: evaluated and tested several for both the newsletters and hybrid forecasting:

Opened a bank account for our nonprofit.

Performed reanalyses of my prior AI/LLM research:

Reviewed Christopher Karvetski's and my 2019 hybrid human/AI system

Pending tasks:

So now I'm at the stage of "move slowly and don't break things" until I get feedback on this report from the BlueDot judges and my teammates at BestWorld.

P.S. In case you are wondering who I am and why I'm here --->


© 2024 Carolyn Meinel. All rights reserved.