Merge pull request #61 from plastic-labs/chl_blog
To us: it's obvious. But we get asked this a lot:

> Why do I need to personalize my AI application?

Fair question; not everyone has gone down this conceptual rabbit hole to the extent we have at [Plastic](https://plasticlabs.ai) and with [Honcho](https://honcho.dev).

Short answer: people like it.

In the tech bubble, it's easy to forget what *most* humans like. Isn't building stuff people love our job, though?

In web2, it's taken for granted. Recommender algorithms make UX sticky, retaining users long enough to monetize them. To build products people love at scale, those companies had to learn whether *billions* of users--in aggregate--prefer personalized products and experiences or not.

In physical reality too, most of us prefer white-glove professional services, bespoke products, and friends and family who know us *deeply*. We pay a premium, in both time and money, for those goods and experiences.

The more that's missing, the deeper we're typically stuck in a principal-agent problem, with its attendant overhead, misaligned interests, dissatisfaction, mistrust, and information asymmetry:

<iframe src="https://player.vimeo.com/video/868985592?h=deff771ffe&color=F6F5F2&title=0&byline=0&portrait=0" width="640" height="360" frameborder="0" allow="autoplay; fullscreen; picture-in-picture" allowfullscreen></iframe>

But, right now, most AI applications are just toys and demos:

![[Honcho; User Context Management for LLM Apps#^18066b]]

It's also why everyone is obsessed with evals and benchmarks that have scant practical utility for improving the end user's experience. If we had more examples of good products--ones people loved, killer apps--no one would care about leaderboards anymore.

> OK, but what about services that are purely transactional? Why would a user want that to be personalized? Why complicate it? Just give me the answer, complete the task, etc...

Two answers:

1. Every interaction has context. Like it or not, people have preferences, and the more an app/agent can align with those, the more it can shorten time to value for the user. It can be stickier, more delightful, "just work," and entail less overhead. (We're building more than calculators here, though this applies even to those!)
2. If an app doesn't do this, it'll get out-competed by one that does...or by the ever-improving set of generally capable foundation models.
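
The first answer above can be sketched in a few lines: even a purely transactional request benefits when the app folds stored user preferences into the prompt it sends to the model. This is a minimal, hypothetical Python sketch--it is *not* Honcho's actual API, and every name and field here is an illustrative assumption:

```python
# Hypothetical sketch: fold stored user preferences into an otherwise
# transactional LLM request. Not Honcho's API; all names are made up.

def build_prompt(task: str, user_context: dict) -> str:
    """Prepend known user preferences to a one-off task prompt."""
    prefs = "; ".join(f"{k}: {v}" for k, v in user_context.items())
    return (
        f"User preferences ({prefs}).\n"
        f"Task: {task}\n"
        "Answer directly, respecting the preferences above."
    )

# A returning user's stored context (made-up values):
context = {"units": "metric", "verbosity": "terse"}
print(build_prompt("How far is a 10k run in miles?", context))
```

The point isn't the string formatting; it's that the preference store persists across sessions, so the user never has to restate "metric, keep it short" on every request.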