Elon Musk Presents Tesla’s Optimus Robot

Robots are mechanical extensions of the Internet of Things, so we consider them a factor in Web3. New robots are being developed with emotion sensing to perform assistive tasks like caring for the elderly, and Flippy the Robot is already making fries at White Castle.

About a year ago, Elon Musk presented a man in a spandex suit doing the robot while announcing the start of design on a Tesla android. Last week, we got to see the first results. The prototype robot Tesla presented doesn’t have the agility of Boston Dynamics’ Atlas, the parkour-famous robot (watch this if you haven’t seen it in action), or the cuteness of Honda’s ASIMO, but a few factors might make it one for the history books. Designed to accomplish manual tasks and to replicate human movement as closely as possible, right down to its fingers, it’s already performing basic work at a station in a Tesla factory. The announcement suggested a reasonable timeframe of about 10 years before a consumer might have one in their home.

First, the robot is intended for mass production, with Musk suggesting thousands or even millions could eventually be produced. That means simple construction from relatively common materials, including durable plastics for the external body so it can survive a fall unharmed. Other robots in development don’t have that mass-production, make-it-common-like-a-toaster vibe.

Second, the robot runs on the same AI Tesla uses for self-driving in its cars, which means that unlike other robots in development, its AI has already been trained and tested in an enormous number of real-world scenarios. In a video example, we see what the robot sees as it waters plants, with objects in the environment being continuously identified, the same way a Tesla distinguishes between objects in the world as it drives itself.

Third, Tesla’s goal is to price it at about $20,000 USD. And that’s the scary part. What happens to the economy when an employee capable of basic tasks can be purchased outright for less than a year of wages? And once one robot learns a task, say making a Pumpkin Spice Latte, the rest can learn it through a cellular data update.
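For scale, here’s the back-of-napkin math. The $15/hour wage and full-time hours below are our own illustrative assumptions, not Tesla’s figures:

```typescript
// Back-of-napkin math: a robot's purchase price vs. a year of wages.
// All figures here are illustrative assumptions, not Tesla's numbers.
const robotPrice = 20_000;    // announced target price, USD
const hourlyWage = 15;        // assumed wage for basic manual tasks, USD
const hoursPerYear = 40 * 52; // full-time: 2,080 hours per year

const annualWageCost = hourlyWage * hoursPerYear; // $31,200

console.log(`One year of wages:         $${annualWageCost.toLocaleString()}`);
console.log(`Robot, purchased outright: $${robotPrice.toLocaleString()}`);
console.log(`Pays for itself in ~${((robotPrice / annualWageCost) * 12).toFixed(1)} months`);
```

Under those assumptions, the robot costs less than eight months of wages, and that’s before considering that it doesn’t take breaks, vacations, or a second shift’s pay.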

Implication for marketers: In a perfect world, this could mean more humans are tasked with providing high-touch personal service while robots handle more of the manual work, but given the growing trend toward self-serve payment everywhere from QSR to grocery, we’ll have to see.

Meta Previews Text-to-Video AI

Also in the “well, that’s a bit scary” category: hot on the heels of the explosion in text-to-image AI Art tools like DALL·E 2 and Stable Diffusion, Meta has announced Make-A-Video, a text-to-video system.

We haven’t chatted with anyone who has access yet, and so far it appears to be an announcement only, with a Google Form available to indicate your interest in early access. That said, the demos provided are very well executed, though not comparable to HD video.

Users will be able to enter text to create a video, the same way AI Art tools use text prompts to create an image. But new features will include the ability to upload an image and have it animate into a video, or even to provide two photos and have the system generate a video that fills in the motion between them. The system can also take a video as input and create a new video inspired by it.

Implication for marketers: It’s going to be a lot easier to create a quick video to present an idea. But as AI Art tools are showing us, it will still require a creative human touch to get the desired output, and the videos don’t include audio yet, so traditional video production isn’t going anywhere soon.

Niantic Welcomes Us to the Metaearth

We’ve previously written about Niantic, the creator of the AR game (and one of the most downloaded mobile games ever) Pokémon GO. Niantic recently did two exciting things. First, they launched Lightship, a developer toolset that allows any creator, brands included, to build AR experiences involving everything from multiple users interacting in the same space to projection mapping over buildings with their “Visual Positioning System” (VPS). Second, they purchased 8th Wall, the platform we prefer for developing web-based AR experiences for our clients, as it provides many of the advanced features that used to be available only in app-based AR. And no one likes downloading an app for a few minutes of fun in, say, a retail promotion; a mobile web experience is much faster and cleaner for everyone involved.

Now, Niantic is combining parts of these two services by bringing their Visual Positioning System to web-based augmented reality experiences. What the heck does that mean?

This advancement means brands can do things like:

  • Provide location-aware AR experiences for things like scavenger hunts
  • Present AR overlays and animations that are geo-contextual, like a giant floating AR blimp above a particular Target location
  • Offer location-relevant deals or social content locked to a particular physical location, at a mall for example

VPS currently has over 100,000 locations mapped globally, providing centimetre-level location accuracy built on years of data collected through Niantic’s games.
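To make that last idea concrete, here’s a minimal sketch of location-locked content using the standard browser Geolocation API. To be clear, this is not Niantic’s VPS or the 8th Wall API (VPS is far more precise than GPS); the mall coordinates, radius, and unlockDeal() helper are all hypothetical, and the point is just the gating logic:

```typescript
// Hypothetical sketch: unlock an AR deal only when the visitor is physically
// near a target location. Uses the standard browser Geolocation API, not
// Niantic's VPS. Coordinates, radius, and unlockDeal() are placeholders.
const MALL = { lat: 43.6532, lon: -79.3832 }; // hypothetical target location
const UNLOCK_RADIUS_M = 150;                  // assumed trigger radius, metres

// Haversine distance between two lat/lon points, in metres.
function distanceMetres(lat1: number, lon1: number, lat2: number, lon2: number): number {
  const R = 6_371_000; // Earth's radius in metres
  const toRad = (d: number) => (d * Math.PI) / 180;
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(a));
}

function unlockDeal(): void {
  // Hypothetical: reveal the AR overlay or coupon in the web experience.
  console.log("Deal unlocked — show the AR content.");
}

navigator.geolocation.getCurrentPosition((pos) => {
  const d = distanceMetres(pos.coords.latitude, pos.coords.longitude, MALL.lat, MALL.lon);
  if (d <= UNLOCK_RADIUS_M) unlockDeal();
  else console.log(`Too far away: ${Math.round(d)} m from the unlock zone.`);
});
```

VPS does the same kind of gating with far better precision, anchoring content to a mapped physical spot rather than a rough GPS radius.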

Implication for marketers: Web-based AR just became a lot more advanced and useful, both for complex promotions that can drive higher long-term engagement and for more tactical offers.

Looking for more on Web3? Download our white paper here.

