Big update to Google's Bard and OpenAI ships a new completion model (with logprobs!)
Week 38 of Coding with Intelligence
Bard's new extensions are a very powerful idea: they integrate data from Drive, Docs, Gmail, YouTube, Maps, Flights, and Hotels. Check out how well it does on your data.
Apparently the new gpt-3.5-turbo-instruct model reaches roughly 1800 Elo in chess: https://twitter.com/GrantSlatton/status/1703913578036904431 and https://gist.github.com/grantslatton/8ae9d5bfe0f9e26bb5211a32b799abd3 Note that the completions endpoint also returns logprobs, which can sometimes be useful; read more here: https://twitter.com/alexgraveley/status/1704169124467749090
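One simple thing you can do with the returned logprobs is turn them into plain probabilities to gauge how confident the model was in each sampled token. A minimal sketch (the example values are made up for illustration, not from a real API call; the field name `token_logprobs` matches the completions API response shape):

```python
import math

def token_confidences(token_logprobs):
    """Convert per-token log probabilities (the `token_logprobs`
    field in a completion response) into plain probabilities."""
    return [math.exp(lp) for lp in token_logprobs]

# Illustrative values only (not from a real API call):
sample_logprobs = [-0.05, -1.2, -0.3]
print(token_confidences(sample_logprobs))
```

Tokens near probability 1.0 are ones the model was nearly certain about; low values flag places where it was guessing, which is handy for filtering or re-ranking completions.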
Some cool ideas in here for treating LLMs as function calls with structured output in a very Python-native style. For a real-world example, the gpt-3.5-turbo-instruct chess Gist linked above uses it.
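The core pattern can be sketched in a few lines: a decorator turns a function's signature and docstring into a prompt, and the LLM's reply becomes the return value. This is a hypothetical illustration of the idea, not the library's actual API; `call_llm` is a stub standing in for a real completion call:

```python
import inspect
from typing import Callable

def call_llm(prompt: str) -> str:
    # Stubbed model call so the sketch runs offline;
    # swap in a real completions request here.
    return "<model output for: " + prompt.splitlines()[0] + ">"

def ai_fn(fn: Callable) -> Callable:
    """Hypothetical decorator: the function body never runs;
    its signature and docstring become the prompt instead."""
    def wrapper(*args, **kwargs):
        sig = inspect.signature(fn)
        bound = sig.bind(*args, **kwargs)
        prompt = (
            f"You are the function {fn.__name__}{sig}.\n"
            f"Docstring: {fn.__doc__}\n"
            f"Arguments: {dict(bound.arguments)}\n"
            f"Return only the result."
        )
        return call_llm(prompt)
    return wrapper

@ai_fn
def sentiment(text: str) -> str:
    """Classify the sentiment of `text` as 'positive' or 'negative'."""

print(sentiment("I love this newsletter"))
```

The appeal is that the "prompt" lives in ordinary Python: type hints and docstrings you would write anyway double as the model's instructions.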
Compression Is All You Need? Also check out this earlier talk by Ilya Sutskever on unsupervised learning as compression: https://www.youtube.com/watch?v=AKMuA_TVz3A
The implementation is also available on GitHub: https://github.com/facebookresearch/nougat It's exciting how many high-quality tokens this will unlock for training/retrieval.
tl;dr: up to 8% better on GSM8K and up to 50% better on Big-Bench Hard by letting the LLM find a better prompt. Paper title: "Large Language Models as Optimizers"
Useful to get a "lay of the land". The infra map is very helpful if you're a builder.
Want more? Follow me on Twitter! @ricklamers