Towards Better AI Branding
The Copy Problem
I’ve noticed some AI companies struggling with their copy. One early Anthropic ad reads, “Move your work up and to the Claude.” Another says, “Intelligence so big, you’d swear it was from Texas.” On the Perplexity site, the subhead for the features Pro Search and Deep Research is the same: “Our most powerful search, ideal for complex questions.”
There seems to be a shared strategy behind these messages: let the product speak for itself. The worry that hyper-specific copy might pigeonhole the product gives way to a hyper-cautious mix of vague messaging, fuzzy metaphors, and no compelling call to action. In trying to reach wide swaths of people with varying levels of AI competency, these companies risk capturing only those with pre-existing ideas of what the product might do for them.
In a nascent space, the initial product positioning is less a true representation of the company’s vision and more a hedge against negative responses to the space at large. Without tried-and-true patterns for talking about AI, companies seem to be taking a middle-of-the-road approach to their brand, dropping their values in branding breadcrumbs and waiting for consumer consensus to become clear. But operating on this hesitancy is silly. There is no way to play the middle on a road that doesn’t yet exist, and companies are underestimating the value of a strong position and a flexible brand.
Friendly vs. Futuristic
I’m interested in OpenAI and Anthropic, as the branding of their consumer products is influenced by and reflects back on their respective research organizations. Despite working on similar research problems, their values, and by extension their brands, are considerably different. Anthropic has routed its brand towards friendliness, and OpenAI leans futuristic. Still, this is not a dichotomy—both OpenAI’s ChatGPT and Anthropic’s Claude are heavily anthropomorphic and thus friend-like, and by the nature of their research, the two companies are inherently futuristic. In recent weeks, OpenAI has updated its GPT-4o model, adjusting its personality and giving users more choice in customizing it. Claude’s personality, on the other hand, was baked in from the beginning—much of Anthropic’s safety positioning is based on the idea that Claude’s personality should be highly controlled to avoid potentially destructive behaviors, steering it towards benign friendliness. The relative positioning really matters, especially as new consumer AI products come on the scene and establish themselves alongside the big players—often on the extremes.
OpenAI: The Future is Neutral
Though artificial general intelligence (AGI) is a lofty idea, OpenAI has positioned it as an outcome not drastically different from the major technological innovations of the past, a framing made clear in a Super Bowl spot that drums up hype for its products while comforting the risk-averse. The ad is representative of the strategic position OpenAI is trying to wield: one where its research is novel and world-changing but will eventually become as commonplace as the automobile or the internet.
The recent in-house rebrand marks a clear shift towards the human and away from the machine. It is not a total departure from the original brand, which may well be the problem: the old brand has been scrapped for parts, yet those parts no longer represent the values or the visual style of the new one. OpenAI’s early brand work by Ben Barry and Ludwig Pettersson made use of form-based visual systems that echoed the structure of the brand’s “blossom” logomark—lots of hexagons, triangles, and circles—and grounded the brand in primitive forms, perhaps a way to represent the building blocks of its computer graphics work for Sora. The rebrand, which leans into nature, is quite a departure from this geometric edginess. There’s some strong motion work by Studio Dumbar, which focuses on two marks: the “point,” a black circle that represents generative possibility, and the “emotive point,” a blue circle that evokes the ocean and oscillates in response to user inputs. The motion is a solid attempt to make generative outputs feel natural, but interestingly, this work doesn’t get much use within ChatGPT. The abstract cover images across the site have been replaced by a mixture of human photography and Sora outputs, and the result is disjointed: every image on the site can be reasonably labeled as human or machine. The rebrand tries to eliminate the tension between geometric edginess and nature by combining them in the visual system, but the strategy is not cohesive. A neutral feeling emerges from this patchwork, as the brand plays into the values of both the old and new brands without visually reconciling the shift.
I’d be interested to see more dynamic motion work incorporated into the product, especially in a visual style that blends the abstract possibility of the old brand with the human feeling of the new one, perhaps drawing on the transformative feeling of the dithered animations in the Super Bowl ad.
Anthropic’s Anthropomorphism
Anthropic’s branding of Claude is a reaction to the same problem that OpenAI grapples with: skepticism of large-scale AI systems. Its solution is to prioritize safety and trust at every level of the brand. The color scheme, illustration system, marketing videos, and UI all function to make “Claude” feel like a human, a friend or coworker, while staying true to the safety roots of the brand. This is a delicate balance, and it makes marketing efforts much harder: there’s a neutral tone to much of Anthropic’s ad copy that stands in stark contrast to how its own users and employees talk about using Claude. The solution is to use this juxtaposition to its advantage. The best attempts only slightly lean into the anthropomorphism but are wildly effective; in one marketing video, employees demo a new tool by “ask[ing] Claude” to complete tasks. The anthropomorphism of Claude is a double-edged sword: on the one hand, it shows off the level of detail put into constructing its personality and makes usage more deliberate; on the other, it can feel unnatural and misleading to a general audience. When marketing the product, it’s hard to bridge the gap between what users love about Claude and high-level business speak about intelligent AI and productivity.
The brand’s friendly elements mostly present visually. Attempts to incorporate that warmth into web copy mostly fall flat, like “Talk to Claude about anything—it’s literally always up for a yap.” The spirit of the brand shines in the illustration system by Geist, though it’s underutilized: scribbly drawings of nodes and hands convey the fun, dynamic experience of using Claude while staying true to its research underpinnings. These drawings don’t make it into the product, which uses neutral iconography and components without much ornamentation. There’s limited motion work on the site, which is great for usability, but I’d like to see more relevant iconography incorporated. The brand’s orange logomark is front-and-center on the homepage and wiggles as content loads in the chat; that’s a great opportunity to introduce other icons and unify the system with animations.
Anthropic’s marketing content mostly centers on its safety efforts, and its long-form videos with employees reap the benefits of scientific communication done well, especially for its audience of developers and researchers. There’s still a lot of potential for growth in reaching people who think about AI progress but are not technical, though I think this communication would do better as a robust campaign rather than a one-time billboard that refers to Claude as “AI, built with a moral compass.” Unfortunately, safety tends to be framed in negative terms like “not destructive,” which isn’t billboard-worthy; the reverse framing might motivate an audience to think about what an AI’s “moral compass” should be, but it doesn’t convert them into users. I see safety as a backbone that establishes transparency and trust and shines through the brand, but it has to be shown alongside other brand values. Just as no company can rely solely on the technical feats of its underlying models, Anthropic cannot rely on its ethical framework alone for both building and marketing its products. As Anthropic scales, I think its greatest brand strategy is to embrace the dissonance of calling Claude friendly and moral in the same breath—after all, this is what makes Anthropic’s brand so unique.
Brand Evolution
What has become clear is that flexibility is the best brand asset to have. The major players are toeing the line between friendly and futuristic, which is why their brands don’t yet feel fully cohesive. I think the dichotomy will fade as the culture around consumer AI changes, and there will be many more axes on which to position a brand. A truly flexible brand is anchored around a strong position that can be adapted for each subset of the intended audience. As companies flesh out the distinguishing characteristics of their products, we’ll see greater consistency in their copy, visual systems, and overall brand. There’s still a lot of room to grow—it’s too early to declare which products have won over which audiences—which means now is the best time to rectify brand weaknesses and come out stronger.