
Commit 98fe325

speed edits
1 parent 6ec5871 commit 98fe325

File tree

3 files changed: +10 -20 lines


src/components/Carousel.jsx (1 addition, 1 deletion)

@@ -138,7 +138,7 @@ const Carousel = ({ carouselData }) => {
 
   // Calculate animation duration based on screen width
   // Smaller screens get faster animation to compensate
-  const animationDuration = Math.max(15, screenWidth / 50);
+  const animationDuration = Math.max(8, screenWidth / 80);
 
   return (
     <div
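The commit message "speed edits" matches the arithmetic of this one-line change: lowering the floor from 15 to 8 seconds and raising the divisor from 50 to 80 shortens the animation at every width. A minimal sketch comparing the two formulas (the sample widths are illustrative, not from the commit):

```javascript
// Old vs. new carousel animation duration in seconds, per the diff above.
const oldDuration = (screenWidth) => Math.max(15, screenWidth / 50);
const newDuration = (screenWidth) => Math.max(8, screenWidth / 80);

// Narrow screens hit the floor; wide screens use the width-based term.
// 400px:  old 15s   -> new 8s   (floor lowered)
// 1920px: old 38.4s -> new 24s  (divisor raised)
for (const width of [400, 1920]) {
  console.log(width, oldDuration(width), newDuration(width));
}
```

At every width the new duration is shorter, so the marquee scrolls visibly faster.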

src/components/DemoPage.jsx (8 additions, 18 deletions)

@@ -26,28 +26,17 @@ const DemoPage = () => {
   const codeString = `from dotenv import load_dotenv
 load_dotenv()
 
+import asyncio
 from gum import gum
 from gum.observers import Screen
-import time
 
-# Create observer instances
-screen_observer = Screen()
-g = gum(
-    "Test User",
-    screen_observer,
-)
+async def main():
+    async with gum("Omar Shaikh", Screen()):
+        await asyncio.Future()  # run forever (Ctrl-C to stop)
 
-# Start the update loop
-g.start_update_loop()
-
-try:
-    # Keep the main thread alive
-    while True:
-        time.sleep(1)
-except KeyboardInterrupt:
-    # Handle clean shutdown when Ctrl+C is pressed
-    print("Shutting down...")
-    g.stop_update_loop()`;
+if __name__ == "__main__":
+    asyncio.run(main())
+`;
 
   const abstractText = "Human-computer interaction has long imagined technology that understands us—from our preferences and habits, to the timing and purpose of our everyday actions. Yet current user models remain fragmented, narrowly tailored to specific applications, and incapable of the flexible, cross-context reasoning required to fulfill these visions. This paper presents an architecture for a general user model (GUM) that can be used by any application. The GUM takes as input any unstructured observation of a user (e.g., device screenshots) and constructs confidence-weighted natural language propositions that capture that user's behavior, knowledge, beliefs, and preferences. GUMs can infer that a user is preparing for a wedding they're attending from a message thread with a friend. Or recognize that a user is struggling with a collaborator's feedback on a draft paper by observing multiple stalled edits and a switch to reading related work. GUMs introduce an architecture that infers new propositions about a user from multimodal observations, retrieves related propositions for context, and continuously revises existing propositions. To illustrate the breadth of applications that GUMs enable, we demonstrate how they augment chat-based assistants with contextual understanding, manage OS notifications to surface important information only when needed, and enable interactive agents that adapt to user preferences across applications. We also instantiate a new class of proactive assistants (GUMBOs) that discover and execute useful suggestions on a user's behalf based on their GUM. In our evaluations, we find that GUMs make calibrated and accurate inferences about users, and that assistants built on GUMs proactively identify and perform actions of meaningful value that users wouldn't think to request explicitly. From observing a user coordinating a move with their roommate, GUMBO worked backward from the user's move-in date and budget, generated a personalized schedule with logistical to-dos, and recommended helpful moving services. Altogether, GUMs introduce new methods that leverage large multimodal models to understand unstructured user context—enabling both long-standing visions of HCI and entirely new interactive systems that anticipate user needs.";

@@ -127,6 +116,7 @@ except KeyboardInterrupt:
         border: '1px solid rgba(255, 255, 255, 0.1)',
         transition: 'all 0.2s ease',
         boxShadow: '0 2px 5px rgba(0, 0, 0, 0.1)',
+        userSelect: 'none'
       }}
       onClick={toggleAbstract}
       onMouseOver={(e) => e.currentTarget.style.backgroundColor = 'rgba(255, 255, 255, 0.1)'}

src/components/LeftPane.jsx (1 addition, 1 deletion)

@@ -142,7 +142,7 @@ const LeftPane = ({ selectedHour, onTimeChange, activity }) => {
         <span style={{ fontSize: '14px', color: '#999' }}>GIF Screen</span>
       </div>
       <p style={{ margin: '15px 0 10px 0', fontSize: '16px' }}>
-        The user is <b>{activity.charAt(0).toLowerCase() + activity.slice(1)}</b>
+        <b>{activity}</b>
       </p>
     </div>
 

0 commit comments