
ChatGPT Inspires Virtual AI “Wife”, DeepMind Introduces DreamerV3, and OpenAI Teases Plans for Premium Chat

Researchers demonstrate surprising differences between causality-based localization and knowledge editing in language models. OpenAI teased a premium version of ChatGPT, and a DIY programmer built a virtual “wife” with the free version. NASA wants help prioritizing which digital resources it should open source first, and we’re teaching folks to train a Scikit-learn model on the cloud. Let’s dive in!

Research Highlights

Researchers from DeepMind introduced DreamerV3, a general and scalable algorithm based on world models that reportedly outperforms previous approaches across a wide range of domains with fixed hyperparameters. Applied out of the box, DreamerV3 is claimed to be the first algorithm to collect diamonds in Minecraft from scratch without human data or curricula, a long-standing challenge in artificial intelligence.

To incorporate and adapt to natural-language corrections, Stanford researchers presented a framework they’re calling Language-Informed Latent Actions with Corrections (LILAC). Compared to existing learned baselines covering both open-loop instruction following and single-turn shared autonomy, the researchers claim that their corrections-aware approach achieves higher task completion rates and is subjectively preferred by users for its reliability, precision, and ease of use.

According to a paper published by Google’s AI research team, a fact stored in a model can be altered by adjusting weights in layers other than those identified by standard localization methods. This finding calls into question how past work relies on Causal Tracing to select which model layers to edit. The results imply that a deeper mechanistic understanding of the inner workings of pretrained language models does not necessarily lead to insights into the most effective means of altering their behavior.

ML Engineering Highlights

A programmer created a virtual AI “wife” using ChatGPT, Stable Diffusion, and other text-to-speech and computer-vision tools. “I became obsessed with decreasing her latency,” the programmer said of his simulated waifu, before he tearfully euthanized her. “I’ve spent over $1000 in cloud computing credits just to talk to her.”

OpenAI gave a strong hint that it plans to start charging for ChatGPT. As part of its efforts to “ensure [the tool’s] long-term viability,” OpenAI has announced on its official Discord server that it is “starting to think about how to monetize ChatGPT.”

Bird Buddy announced the impending release of a hummingbird feeder equipped with a camera that can identify individual birds. Their model can tell the difference between 350 unique hummingbird species, thanks to data collected from a camera designed to keep up with the birds’ lightning-fast flight.

Open Source Highlights

The NASA Science Mission Directorate is interested in learning more about the needs of the public research community in terms of data, software, and computing resources. Thanks to advancements in cloud computing and network infrastructure, remote users can now access massive data sets and utilize remote computing resources, such as those run by NASA. The space agency is pushing its Open-Source Science Initiative forward, and its leaders want to know which data sets and computing resources would most benefit the scientific community as a whole, both for furthering individual lines of research and for facilitating collaboration among researchers working in different fields.

Tutorial of the Week

Have you ever wanted to train a Scikit-learn model on the cloud, but had no clue where to start? We’ve got you covered with a tutorial that’ll show you how to migrate from a local machine to a cloud provider as your datasets and models scale.
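To give you a sense of the starting point, here is a minimal, self-contained sketch of the kind of Scikit-learn training script the tutorial helps you migrate to the cloud. The dataset and model below are illustrative choices, not the tutorial’s exact code:

```python
# Minimal local Scikit-learn training script (illustrative example).
# The tutorial covers how to move this kind of workload to a cloud provider
# as your data and models outgrow a local machine.
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Load a public regression dataset and split it for evaluation.
X, y = fetch_california_housing(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Train a forest across all available cores.
model = RandomForestRegressor(n_estimators=200, n_jobs=-1, random_state=42)
model.fit(X_train, y_train)

# Report held-out error.
print("MAE:", mean_absolute_error(y_test, model.predict(X_test)))
```

The same script scales naturally: once your dataset no longer fits comfortably on a laptop, the tutorial walks through running it on cloud hardware instead.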

Community Spotlight

Want your work featured? Contact us on Slack or email us at [email protected]

  • This recently landed, community-led PR is a quick, elegant fix for a problem related to the MLFlowLogger. Shoutout to Adrien Bufort for their sharp eye and rapid solution!
  • Another week has passed, and we’ve got another set of docs PRs to tickle your fancy. You can check them out here and here. In the open-source world, docs really do take a village. Kudos to all our awesome contributors for your hard work!
  • Our community label on GitHub is a great way to get started with PRs related to Lightning. Want to make a contribution? Check it out!

Lightning AI Highlights

We recently shared a technique for using Lightning to enhance your Stable Diffusion prompts with GPT-3. You can get started by running your own autoscaled Stable Diffusion server.
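If you’re curious how the pieces fit together, here’s a rough sketch of the idea rather than the exact code from our post. It assumes the legacy OpenAI Completions API (openai-python < 1.0, GPT-3 era), and the prompt template, server URL, and response handling are placeholders you’d swap for your own setup:

```python
# Sketch: use GPT-3 to embellish a short prompt, then send the result to a
# Stable Diffusion endpoint. The endpoint URL and response handling below are
# hypothetical placeholders, not the actual Lightning app's API.
import openai
import requests

openai.api_key = "sk-..."  # your OpenAI API key

user_prompt = "a lighthouse at dusk"

# Legacy Completions API (openai-python < 1.0), as used in the GPT-3 era.
completion = openai.Completion.create(
    model="text-davinci-003",
    prompt=f"Rewrite this image prompt with vivid artistic detail: {user_prompt}",
    max_tokens=60,
    temperature=0.8,
)
enhanced_prompt = completion.choices[0].text.strip()

# Hypothetical autoscaled Stable Diffusion server; substitute your own URL.
# This assumes the server returns raw image bytes for a JSON prompt payload.
response = requests.post(
    "https://your-diffusion-server.example.com/predict",
    json={"prompt": enhanced_prompt},
)
response.raise_for_status()
with open("image.png", "wb") as f:
    f.write(response.content)
```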

Forums are back! We’ve given the official Lightning forums a refresh. We hope that you’ll find them a useful resource to ask questions, respond to community members, and build together. (This elder millennial newsletter contributor swears by forums! Our Slack isn’t going anywhere, though. Think of this like having your cake and eating it too.)

Don’t Miss the Submission Deadline

  • ACL 2023: The 61st Annual Meeting of the Association for Computational Linguistics. July 9–14, 2023 (Toronto, Canada). Full paper submission deadline: January 20, 2023
  • ICML 2023: Fortieth International Conference on Machine Learning. July 23–29, 2023 (Honolulu, Hawaii). Full paper submission deadline: January 26, 2023, 08:00 PM UTC
  • IROS 2023: International Conference on Intelligent Robots and Systems. October 1–5, 2023 (Detroit, Michigan). Full paper submission deadline: March 1, 2023
  • ICCV 2023: International Conference on Computer Vision. October 2–6, 2023 (Paris, France). Full paper submission deadline: March 8, 2023, 23:59 GMT