
Commit e775a13

move shit around

1 parent cb33ce0 commit e775a13

1 file changed

src/components/DemoPage.jsx

Lines changed: 33 additions & 21 deletions
@@ -95,6 +95,36 @@ except KeyboardInterrupt:
     </a>
   </div>
 
+
+  <div style={
+    {
+      marginTop: '20px',
+      marginBottom: '20px',
+      textAlign: 'center'
+    }
+  }>
+    <h3 style={{
+      color: 'var(--color-main-text)',
+      margin: '0 0 15px 0',
+      fontSize: '20px',
+      fontWeight: '600',
+      textAlign: 'center',
+      width: '100%'
+    }}>
+      Abstract
+    </h3>
+    <p style={{
+      lineHeight: '1.6',
+      margin: '0',
+      fontSize: '15px',
+      textAlign: 'justify',
+      paddingLeft: '20%',
+      paddingRight: '20%'
+    }}>
+      Human-computer interaction has long imagined technology that understands us—from our preferences and habits, to the timing and purpose of our everyday actions. Yet current user models remain fragmented, narrowly tailored to specific applications, and incapable of the flexible, cross-context reasoning required to fulfill these visions. This paper presents an architecture for a general user model (GUM) that can be used by any application. The GUM takes as input any unstructured observation of a user (e.g., device screenshots) and constructs confidence-weighted natural language propositions that capture that user's behavior, knowledge, beliefs, and preferences. GUMs can infer that a user is preparing for a wedding they're attending from a message thread with a friend. Or recognize that a user is struggling with a collaborator's feedback on a draft paper by observing multiple stalled edits and a switch to reading related work. GUMs introduce an architecture that infers new propositions about a user from multimodal observations, retrieves related propositions for context, and continuously revises existing propositions. To illustrate the breadth of applications that GUMs enable, we demonstrate how they augment chat-based assistants with contextual understanding, manage OS notifications to surface important information only when needed, and enable interactive agents that adapt to user preferences across applications. We also instantiate a new class of proactive assistants (GUMBOs) that discover and execute useful suggestions on a user's behalf based on their GUM. In our evaluations, we find that GUMs make calibrated and accurate inferences about users, and that assistants built on GUMs proactively identify and perform actions of meaningful value that users wouldn't think to request explicitly. From observing a user coordinating a move with their roommate, GUMBO worked backward from the user's move-in date and budget, generated a personalized schedule with logistical to-dos, and recommended helpful moving services. Altogether, GUMs introduce new methods that leverage large multimodal models to understand unstructured user context—enabling both long-standing visions of HCI and entirely new interactive systems that anticipate user needs.
+    </p>
+  </div>
+
   <div style={{
     margin: '30px auto',
     maxWidth: '90%',
@@ -127,10 +157,12 @@ except KeyboardInterrupt:
       margin: '0',
       fontSize: '15px'
     }}>
-      We introduce General User Models (GUMs) that understand users across all applications. GUMs observe how you use your computer, infer your behaviors, knowledge and preferences, and help applications better serve your needs. Unlike traditional user models that are app-specific, GUMs provide a unified understanding of users that can power proactive assistants, contextual recommendations, and personalized experiences across your entire digital life.
+      This paper introduces a general user model (GUM) that helps computers better understand users' actions, needs, and preferences by observing any interaction you have with your computer (with a vision-language model). By using GUMs, apps and assistants can proactively suggest and execute helpful actions without users needing to explicitly ask.
     </p>
+
   </div>
 
+
   <div style={{display: 'flex', gap: '20px'}}>
     <div
       style={{
@@ -242,26 +274,6 @@ except KeyboardInterrupt:
     </SyntaxHighlighter>
   </div>
 
-  <div style={{ marginTop: '20px' }}>
-    <h3 style={{
-      color: 'var(--color-main-text)',
-      margin: '0 0 15px 0',
-      fontSize: '20px',
-      fontWeight: '600',
-      display: 'flex',
-      alignItems: 'center'
-    }}>
-      Abstract
-    </h3>
-    <p style={{
-      lineHeight: '1.6',
-      margin: '0',
-      fontSize: '15px'
-    }}>
-      Human-computer interaction has long imagined technology that understands us—from our preferences and habits, to the timing and purpose of our everyday actions. Yet current user models remain fragmented, narrowly tailored to specific applications, and incapable of the flexible, cross-context reasoning required to fulfill these visions. This paper presents an architecture for a general user model (GUM) that can be used by any application. The GUM takes as input any unstructured observation of a user (e.g., device screenshots) and constructs confidence-weighted natural language propositions that capture that user's behavior, knowledge, beliefs, and preferences. GUMs can infer that a user is preparing for a wedding they're attending from a message thread with a friend. Or recognize that a user is struggling with a collaborator's feedback on a draft paper by observing multiple stalled edits and a switch to reading related work. GUMs introduce an architecture that infers new propositions about a user from multimodal observations, retrieves related propositions for context, and continuously revises existing propositions. To illustrate the breadth of applications that GUMs enable, we demonstrate how they augment chat-based assistants with contextual understanding, manage OS notifications to surface important information only when needed, and enable interactive agents that adapt to user preferences across applications. We also instantiate a new class of proactive assistants (Gumbos) that discover and execute useful suggestions on a user's behalf based on the their GUM. In our evaluations, we find that GUMs make calibrated and accurate inferences about users, and that assistants built on GUMs proactively identify and perform actions of meaningful value that users wouldn't think to request explicitly. From observing a user coordinating a move with their roommate, Gumbo worked backward from the user's move-in date and budget, generated a personalized schedule with logistical to-dos, and recommended helpful moving services. Altogether, GUMs introduce new methods that leverage large multimodal models to understand unstructured user context—enabling both long-standing visions of HCI and entirely new interactive systems that anticipate user needs.
-    </p>
-  </div>
-
   <div style={{ marginTop: '20px' }}>
     <h3 style={{
       color: 'var(--color-main-text)',
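For reference, the Abstract block that this commit relocates can be read as a self-contained piece of JSX. The sketch below is illustrative only: the component name AbstractSection and the abstractText prop are assumptions, not part of the commit, and DemoPage.jsx keeps this markup inline as the diff above shows.

// Hypothetical extraction of the relocated Abstract block into its own component.
// AbstractSection and abstractText are illustrative names; the CSS variable
// --color-main-text is assumed to be defined by the page's stylesheet.
function AbstractSection({ abstractText }) {
  return (
    <div style={{ marginTop: '20px', marginBottom: '20px', textAlign: 'center' }}>
      <h3 style={{
        color: 'var(--color-main-text)',
        margin: '0 0 15px 0',
        fontSize: '20px',
        fontWeight: '600',
        textAlign: 'center',
        width: '100%'
      }}>
        Abstract
      </h3>
      <p style={{
        lineHeight: '1.6',
        margin: '0',
        fontSize: '15px',
        textAlign: 'justify',
        paddingLeft: '20%',
        paddingRight: '20%'
      }}>
        {abstractText}
      </p>
    </div>
  );
}

// Usage in DemoPage.jsx would then reduce to a single line, e.g.:
// <AbstractSection abstractText="Human-computer interaction has long imagined technology that understands us..." />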
