Written in Rust  ·  Concurrent reads  ·  μs writes  ·  ~11MB binary

The Intuition Database

An in-memory decision database: your app learns from every interaction and makes smarter decisions automatically.

$ docker run -d -p 8080:8080 simeonlukov/banditdb:latest
$ pip install banditdb-python

Building a self-learning system today is a plumbing nightmare

Four moving parts. Four failure modes. Weeks to build. Months to maintain.

✗ The old way
📨 Kafka - event streaming
⚡ Redis - state management
🐍 Python worker - matrix math
🐘 Postgres - interaction logs
✓ The BanditDB way
🎰 BanditDB - everything, in one binary
  • ✓ In-memory matrix updates (microseconds)
  • ✓ Built-in exploration algorithms, tunable per campaign
  • ✓ Write-Ahead Log for crash recovery
  • ✓ TTL cache for delayed rewards
  • ✓ Propensity-logged Parquet export
  • ✓ Native MCP tools for AI agents

How it works

Four API calls: one to define, three to learn.

BanditDB keeps weight matrices in memory. Every outcome you report updates those weights in microseconds, gradually building intuition about which choice wins for which context.
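The stats section below credits the microsecond updates to the Sherman-Morrison rank-1 formula. Here is a minimal numpy sketch of that idea for a single arm, assuming a LinUCB-style linear model; the variable names and the identity prior are our illustration, not BanditDB internals:

```python
import numpy as np

def sherman_morrison_update(A_inv, x):
    """Rank-1 update of A_inv after A <- A + x x^T, in O(d^2) instead of O(d^3)."""
    Ax = A_inv @ x
    return A_inv - np.outer(Ax, Ax) / (1.0 + x @ Ax)

# Per-arm state for a 5-dimensional context, as in the sleep campaign:
d = 5
A_inv = np.eye(d)   # inverse covariance matrix, starts at the identity
b = np.zeros(d)     # reward-weighted sum of contexts

# One rewarded interaction: context x, observed reward r.
x = np.array([1.0, 0.35, 0.50, 0.60, 0.96])
r = 0.27
A_inv = sherman_morrison_update(A_inv, x)
b += r * x
theta = A_inv @ b   # updated weight vector for this arm
```

No matrix is ever re-inverted from scratch, which is what keeps per-outcome updates at microsecond scale.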

1

Create

run once at startup

Name the campaign, list the arms, and set the context dimension. BanditDB initialises the weight matrices and is immediately ready to serve predictions.

curl -X POST http://localhost:8080/campaign \
  -H "Content-Type: application/json" \
  -d '{
    "campaign_id": "sleep",
    "arms": [
      "decrease_temperature",
      "decrease_light",
      "decrease_noise"
    ],
    "feature_dim": 5
  }'
↻ loops with every participant
2

Predict

Pass the participant's context vector. Get back the recommended intervention and an interaction ID to track it.

curl -X POST http://localhost:8080/predict \
  -H "Content-Type: application/json" \
  -d '{
    "campaign_id": "sleep",
    "context": [1.0, 0.35, 0.50, 0.60, 0.96]
  }'

# → {"arm_id": "decrease_temperature",
#    "interaction_id": "a1b2c3..."}
3

Act

Apply the chosen intervention. BanditDB holds the context in its TTL cache, ready to receive the reward when the outcome is known.

# arm = "decrease_temperature"
apply_intervention(user_id, arm)

# lower bedroom temperature to 17°C
# BanditDB waits for the outcome...
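A rough sketch of what a delayed-reward TTL cache does. This is purely illustrative of the concept; BanditDB's cache is internal, and the class below is ours:

```python
import time

class TTLCache:
    """Hold each interaction's context until its reward arrives or the TTL lapses."""

    def __init__(self, ttl_seconds, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock          # injectable for testing
        self._items = {}            # interaction_id -> (context, expiry)

    def put(self, interaction_id, context):
        self._items[interaction_id] = (context, self.clock() + self.ttl)

    def pop(self, interaction_id):
        """Return the context if still fresh, else None (reward arrived too late)."""
        item = self._items.pop(interaction_id, None)
        if item is None:
            return None
        context, expiry = item
        return context if self.clock() <= expiry else None
```

If the reward arrives within the TTL, the stored context is paired with it and the matrices update; after expiry the interaction is simply dropped.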
4

Reward

Report the outcome the next morning. Matrices update in microseconds. Every subsequent participant gets a smarter recommendation.

curl -X POST http://localhost:8080/reward \
  -H "Content-Type: application/json" \
  -d '{
    "interaction_id": "a1b2c3...",
    "reward": 0.27
  }'

# → "OK"
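Steps 2 and 4 above can be wrapped in a small Python client using only the standard library. The helper names are ours, and the official banditdb-python SDK may expose a different interface:

```python
import json
import urllib.request

BASE_URL = "http://localhost:8080"

def build_request(path, body):
    """Build a JSON POST request for the BanditDB HTTP API."""
    return urllib.request.Request(
        BASE_URL + path,
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

def post(path, body):
    """Send the request and parse the JSON reply (needs a running server)."""
    with urllib.request.urlopen(build_request(path, body)) as resp:
        return json.loads(resp.read())

# One nightly cycle for a single participant:
# prediction = post("/predict", {"campaign_id": "sleep",
#                                "context": [1.0, 0.35, 0.50, 0.60, 0.96]})
# apply_intervention(user_id, prediction["arm_id"])
# ...next morning, once the outcome is known:
# post("/reward", {"interaction_id": prediction["interaction_id"],
#                  "reward": 0.27})
```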
register with Claude
$ claude mcp add banditdb banditdb-mcp --env BANDITDB_URL=http://localhost:8080
create_campaign
Define a new decision
get_intuition
Ask which action to take
record_outcome
Report success or failure
campaign_diagnostics
Inspect learning state
For AI Agents

Your agent swarm builds shared intuition

Standard LLM agents are stateless: if they make a bad decision, they repeat it tomorrow. BanditDB's built-in MCP server gives your entire agent swarm shared, persistent memory.

Two commands and BanditDB is a native tool in Claude. Your agents can get intuition, record outcomes, and inspect learning state - no config-file editing required.

Every decision made by any agent in the network improves the routing for all future agents.

Any decision with a measurable outcome

BanditDB works wherever you can define a context, a set of choices, and a reward.

🤖

LLM Routing

Route tasks to the right model. Learn which model wins for which task type across your entire agent fleet.

💰

Dynamic Pricing

Learn which price maximises revenue for which customer profile. Adapt in real time as behaviour changes.

🎯

Personalisation

Show the right content, offer, or layout to each user segment without a data science team.

๐Ÿฅ

Clinical Trials

Adaptive trial designs that route patients to the most promising treatment arm as evidence accumulates.

🛒

Checkout Optimisation

Learn which upsell offer converts best for which cart composition and customer history.

⚖️

Legal Intake Routing

Route inbound matters to the right response - consult, intake form, referral, or decline - based on case value, capacity, and conflict risk.
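Each of these use cases reduces to the same three ingredients: a context vector, a set of arms, and a reward. For instance, an LLM-routing context might be built like this; the feature choice is entirely illustrative, since BanditDB only requires that the vector's length match the `feature_dim` you declared:

```python
def routing_context(prompt_tokens: int, needs_code: bool,
                    needs_reasoning: bool, latency_sensitive: bool) -> list[float]:
    """Illustrative 5-dimensional context vector for LLM routing."""
    return [
        1.0,                               # bias term
        min(prompt_tokens / 4000.0, 1.0),  # prompt length, capped at 1.0
        1.0 if needs_code else 0.0,
        1.0 if needs_reasoning else 0.0,
        1.0 if latency_sensitive else 0.0,
    ]
```

The same vector shape is then sent to /predict on every request, and the observed task outcome becomes the reward.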

Walkthroughs

Learn by example

Six end-to-end examples, ordered from simplest to most advanced.

★ Start here
🌙

Sleep Improvement

Temperature, light, or noise - which adjustment works best for each person? A pure curl walkthrough, no SDK needed.

1
Measurable lift after ~300 rewarded outcomes, assuming ≥70% next-morning reporting compliance.
curl 3 arms · 5 features Read walkthrough →
🛒

E-Commerce Upsell

Discount, free shipping, or nothing - learns which checkout offer closes each shopper without giving margin away.

2
Measurable lift after ~1,500 checkout interactions at 50% completion; the binary reward is the noisiest of the six examples.
python 3 arms · 3 features Read walkthrough →
⚖️

Law Firm Client Intake

Consult, intake form, refer, or decline - learns which response maximises matter value for each enquiry profile, accounting for capacity and conflict risk.

3
Measurable lift after ~600 intake decisions at 50% outcome rate; the reward is multi-valued, not binary.
python 4 arms · 5 features Read walkthrough →
💰

Dynamic Pricing

Hold margin or liquidate? Learns from sell-through rate, holiday proximity, and competitor pricing - the context describes the market, not the user.

4
Measurable lift after ~500 hourly cycles on common states; rare holiday combinations require 2–3 full seasons.
python 4 arms · 5 features Read walkthrough →
🤖

Prompt Optimisation

Learns which prompt strategy (zero-shot, chain-of-thought, few-shot, structured) produces the best response for each task type. Your evals run in production, not in a spreadsheet.

5
Measurable lift after ~400 requests; LLM-as-judge gives near-100% reward observability.
python 4 arms · 5 features Read walkthrough →
🏥

Adaptive Clinical Trials

Routes patients toward the most effective treatment arm in real time as evidence accumulates - no waiting months for interim analysis.

6
Measurable lift after ~400 completed follow-ups; enrol ~500 patients to account for 80% compliance.
python 3 arms · 4 features Read walkthrough →
~11MB
Native binary for Linux, macOS, and Windows
μs
Matrix updates via Sherman-Morrison rank-1 formula
~10K
Predictions per second on a single node
+16.7%
Lift over random on MovieLens 100K - up to +24.6% with feature engineering
WAL
Crash-safe durability with Parquet export

Start in one command

No sign-up. No cloud account. No configuration required.

Binary - Linux, macOS, Windows

$ curl -fsSL https://raw.githubusercontent.com/dynamicpricing-ai/banditdb/main/scripts/install.sh | sh

Docker

$ docker run -d -p 8080:8080 simeonlukov/banditdb:latest