two notes
courtlandleer committed May 11, 2024
1 parent aa8b973 commit c2f4426
Showing 2 changed files with 31 additions and 0 deletions.
14 changes: 14 additions & 0 deletions content/notes/Context window size doesn't solve personalization.md
@@ -0,0 +1,14 @@
---
title: "{{title}}"
date: 05.11.24
tags:
- notes
- ml
---
There are two reasons why ever-increasing, even functionally infinite, context windows won't solve personalization for AI apps/agents by default:

1. **Personal context has to come from somewhere.** Namely, from your head--off your wetware. So we need mechanisms to transfer that data from the human to the model. And there's *[[There's an enormous space of user identity to model|a lot of it]]*. At [Plastic](https://plasticlabs.ai) we think the path here is mimicking human social cognition, which is why we built [Honcho](https://honcho.dev)--to ambiently model users, then generate personal context for agents on demand.

2. **If everything is important, nothing is important.** Even if the right context is stuffed into a crammed context window somewhere, the model still needs mechanisms to discern what's valuable and important for generation. What should it pay attention to? What weight should it give different pieces of context in any given moment? Again, humans do this almost automatically, so mimicking what we know about those processes can give the model critical powers of on-demand discernment--even what might start to look to us like intuition, taste, or vibes. (A rough sketch of both mechanisms follows this list.)
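
Here's a minimal sketch of both mechanisms in Python. It's hypothetical throughout--not Honcho's actual API--and the fixed salience scores and lexical-overlap relevance are toy stand-ins for learned user modeling and embedding similarity:

```python
from dataclasses import dataclass, field

@dataclass
class Fact:
    text: str
    salience: float  # how central this seems to the user's identity, 0..1
    recency: float   # decays as the conversation moves on, 0..1

@dataclass
class UserModel:
    facts: list[Fact] = field(default_factory=list)

    def observe(self, message: str) -> None:
        # (1) Ambient modeling: derive personal context from raw messages.
        # A real system would run an LLM theory-of-mind pass here; this
        # stand-in just records the message verbatim.
        self.facts.append(Fact(text=message, salience=0.5, recency=1.0))

    def context_for(self, query: str, k: int = 3) -> list[str]:
        # (2) Discernment: rank stored context instead of stuffing it all in.
        def score(f: Fact) -> float:
            return relevance(query, f.text) * f.salience * f.recency
        return [f.text for f in sorted(self.facts, key=score, reverse=True)[:k]]

def relevance(query: str, fact: str) -> float:
    # Toy lexical overlap; swap in embedding cosine similarity in practice.
    q, f = set(query.lower().split()), set(fact.lower().split())
    return len(q & f) / (len(q | f) or 1)

model = UserModel()
model.observe("training for a marathon in October")
model.observe("prefers terse, technical answers")
print(model.context_for("suggest a weekly running plan"))
```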

All that said, better and bigger context windows are incredibly useful. We just need to build the appropriate supporting systems to leverage their full potential.
@@ -0,0 +1,17 @@
---
title: "{{title}}"
date: 05.11.24
tags:
- notes
- ml
- cogsci
---
While large language models are exceptional at [imputing a startling](https://arxiv.org/pdf/2310.07298v1) amount from very little user data--an efficiency that puts AdTech to shame--the ceiling on what can be modeled is [[User State is State of the Art|vaster than most imagine]].

Contrast recommender algorithms (impressive as they are!), which need mountains of activity data to back into a single preference, with [the human connectome](https://www.science.org/doi/10.1126/science.adk4858), where a single cubic millimeter contains 1,400 TB of compressed representation.
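
For a sense of scale, a back-of-envelope extrapolation (the per-millimeter figure comes from the paper; the ~1.2 million mm³ whole-brain volume is an assumed round number for illustration):

```python
# Rough scale of the gap, for intuition. The 1,400 TB/mm^3 figure is from
# the Science paper; the whole-brain volume below is an assumption.
TB_PER_MM3 = 1_400          # imaging data for one cubic millimeter of cortex
BRAIN_VOLUME_MM3 = 1.2e6    # ~1,200 cm^3 adult brain, expressed in mm^3

total_tb = TB_PER_MM3 * BRAIN_VOLUME_MM3
print(f"{total_tb:.2e} TB ≈ {total_tb / 1e9:.1f} ZB")  # ~1.68e+09 TB ≈ 1.7 ZB
```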

LLMs give us access to a new class of this data, going beyond tracking the behavioral, [[LLMs excel at theory of mind because they read|toward the semantic]]. They can distill and grok much 'softer' psychological elements, allowing insight into complex mental states like value, belief, intention, aesthetic, desire, history, knowledge, etc.

There's so much to do here, though, that plug-in-your-docs/email/activity schemes and user surveys are laughably limited in scope. We need ambient methods running social cognition, like [Honcho](https://honcho.dev).

As we asymptotically approach a fuller accounting of individual identity, we can unlock more positive-sum application/agent experiences, richer than the exploitation of base desire we're used to.
