Drama at OpenAI continues … just kidding, here are some actually useful LLM/AI resources!
Week 47 of Coding with Intelligence
200K context window, reduced hallucination (it actually learned to say "I don't know based on the provided information"), tool use (equivalent to OpenAI Functions), and a new playground in the console.
Beats Pika Labs on some benchmarks, on par with RunwayML.
Really awesome project, from the same guy who had a ChatGPT API before ChatGPT had an API.
Use multiple OpenAI keys to increase your effective rate limits.
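The core idea can be sketched as a thread-safe round-robin key pool; the `KeyPool` class below is an illustrative sketch, not the project's actual interface.

```python
import itertools
import threading

class KeyPool:
    """Rotate through several API keys so per-key rate limits add up.

    Illustrative sketch only -- the real project's interface may differ.
    """

    def __init__(self, keys):
        self._cycle = itertools.cycle(keys)
        self._lock = threading.Lock()  # safe under concurrent requests

    def next_key(self):
        with self._lock:
            return next(self._cycle)

pool = KeyPool(["sk-key-a", "sk-key-b", "sk-key-c"])
# Each outgoing request grabs the next key in turn:
picked = [pool.next_key() for _ in range(4)]
print(picked)  # keys repeat once the pool wraps around
```

With N keys of equal quota, the pool's effective rate limit is roughly N times a single key's, assuming requests are spread evenly.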
Samples at https://styletts2.github.io/ by Columbia University.
Metasploit modules, Nuclei templates and CSRF templates.
At 570B tokens the model does appear to be undertrained, but the MoE structure may shift the Chinchilla-optimal scaling laws.
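For a rough sanity check, the common reading of the Chinchilla result is about 20 training tokens per parameter; the 20:1 ratio below is that approximation, and, as noted above, MoE models may deviate from it.

```python
def chinchilla_optimal_tokens(n_params, tokens_per_param=20):
    """Rough compute-optimal token count: ~20 tokens per parameter.

    The 20:1 ratio is an approximation of the Chinchilla result for
    dense models; MoE architectures may shift it.
    """
    return n_params * tokens_per_param

# A dense 30B-parameter model would want on the order of 600B tokens,
# so a 570B-token budget looks undertrained for anything much larger.
print(f"{chinchilla_optimal_tokens(30e9):.0e}")
```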
Reimplementation of Meta's Segment Anything yields an 8X speedup using PyTorch optimization features & a custom Triton kernel.
You can now ask questions about videos through open-source models. There's also a paper: https://arxiv.org/abs/2311.10122
SLM = small language model. Reduces factual errors by 40% to 58%.
Useful paper if you're working with self-correction prompting techniques.
Very important result: you can verify whether a test set appears in the pre-training data without needing access to model weights or the pre-training corpus. This can be used to validate the integrity of many benchmarks.
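One way this style of black-box check can work is a permutation test: if the model assigns unusually high likelihood to the test set in its canonical order compared to shuffled orders, the set was likely memorized. The sketch below mocks the model's scoring function, since the real method would query the LLM; all names here are illustrative.

```python
import random

def contamination_p_value(score_fn, examples, n_shuffles=99, seed=0):
    """Permutation-test sketch: rank the canonical ordering's score
    against scores of randomly shuffled orderings.

    `score_fn` stands in for the model's log-likelihood over a sequence
    of examples; a small p-value suggests the canonical order was seen
    during pre-training.
    """
    rng = random.Random(seed)
    canonical = score_fn(examples)
    higher = 0
    for _ in range(n_shuffles):
        shuffled = examples[:]
        rng.shuffle(shuffled)
        if score_fn(shuffled) >= canonical:
            higher += 1
    return (higher + 1) / (n_shuffles + 1)

# Mock "model" that memorized the canonical order of ten examples:
memorized = list("abcdefghij")
mock_score = lambda seq: sum(1 for x, y in zip(seq, memorized) if x == y)
print(contamination_p_value(mock_score, memorized))  # small value => contaminated
```

A model with no preference for the canonical order would score shuffles just as highly, pushing the p-value toward 1.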
Very interesting pruned-activation approach that opens up the potential for substantial performance speedups. By ETH Zurich.
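The general idea behind activation pruning can be shown in a toy feedforward layer: a cheap predictor marks which neurons will fire, and only those are computed. This is a plain-Python illustration of the concept, not the paper's method or kernels.

```python
def sparse_ffn(x, weights, biases, active):
    """Toy activation pruning: compute only the neurons a cheap predictor
    marked as active, treating the rest as zero.

    `x` is a feature vector; `weights` holds one row of weights per
    neuron. Plain Python stands in for the custom kernels a real
    implementation would use.
    """
    out = [0.0] * len(weights)
    for i in active:  # skipped neurons cost nothing
        pre = sum(w * xi for w, xi in zip(weights[i], x)) + biases[i]
        out[i] = max(0.0, pre)  # ReLU: pruned neurons would be ~0 anyway
    return out

x = [1.0, -2.0]
W = [[0.5, 0.0], [1.0, 1.0], [0.0, 1.0]]
b = [0.0, 0.0, 3.0]
# Suppose the predictor says only neurons 0 and 2 fire:
print(sparse_ffn(x, W, b, active=[0, 2]))  # [0.5, 0.0, 1.0]
```

The speedup comes from skipping the dot products for inactive neurons entirely, which only pays off with sparsity-aware kernels rather than dense matrix multiplies.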
Categories: Basic Query Engines, Router Query Engine, SubQuestion Query Engine, Text2SQL, Pydantic Programs, Data Agents
Want more? Follow me on Twitter! @ricklamers