The Dawn of Personalized Computing: The R1 Pickup Event

In the heart of New York, a buzz of anticipation electrified the air as tech enthusiasts and curious minds gathered for the R1 Pickup event, hosted by the innovative team at Rabbit. The star of the show? The Rabbit R1, a groundbreaking AI-native device poised to redefine personal computing.

As the event kicked off, Jesse, the charismatic founder of Rabbit, took to the stage, his excitement palpable. “This is the Rabbit R1,” he announced, holding up a sleek, compact device. “Our mission is to create the simplest computer you can order.”

The R1 wasn’t just another gadget; it was a statement, a challenge to the status quo. Rabbit, a startup with the audacity to compete with tech giants, had designed a computer that leveraged the power of AI to simplify the user experience to an unprecedented degree.

Jesse demonstrated the R1’s capabilities, showcasing its intuitive search functions, advanced reasoning with stacked questions, and a revolutionary AI Vision system that could recognize and interpret the environment in real-time. The crowd watched in awe as the R1 transcribed a hand-drawn spreadsheet into a digital document and emailed it back, all within seconds.

The device’s potential was clear, but Jesse was quick to acknowledge the challenges ahead. “We are a startup,” he reminded the audience. “We’re betting against the odds, and we know that 99% of startups will die.” Yet the R1 had already sold over 100,000 units in its first quarter, a testament to the team’s vision and the market’s hunger for a new kind of computing.

Rabbit’s approach to product development was collaborative and community-driven. Features like a terminal mode with a full keyboard, real-time translation that didn’t require speakers to take turns, and a voice recording system that could summarize the key points of long meetings were all direct responses to community requests.

The R1’s Large Action Model (LAM) was a game-changer. Unlike traditional language models, which only generate text, LAM was designed to learn and execute actions within software, making it an active participant in the digital world. It could interact with apps like Spotify, Uber, and even Midjourney’s generative AI, all without relying on APIs or SDKs.

Jesse also teased future updates, including an alarm feature, long-term memory recall, and integration with services like Amazon and 1Password. The company’s roadmap included plans for accessories that would make the R1 wearable, further blurring the line between the digital and physical realms.

The most anticipated feature was Teach Mode, which would allow users to teach the R1 new tasks through demonstration, opening up endless possibilities for personalization and adaptation. Rabbit was cautious, though, mindful of the potential safety implications of such powerful technology. They planned to roll out Teach Mode in stages, starting with a closed alpha, to ensure responsible use.

Looking even further ahead, Jesse unveiled LAM 1.5, an evolution of the action model that would extend the R1’s capabilities beyond the digital screen, interacting with physical objects through advanced sensors and AI-driven interpretation.

The event concluded with a bold vision for the future of computing—a generative UI system that would adapt interfaces to individual user preferences, challenging the one-size-fits-all approach of current apps and operating systems. Rabbit’s ultimate goal was to create a computer that required no learning curve, where intention and natural language would be all that’s needed to achieve results.

As the crowd dispersed, a sense of excitement and possibility lingered. The R1 wasn’t just a new device; it was a glimpse into a future where technology truly served its users, a future that Rabbit was determined to shape.