[412-code-talk] Fwd: 82% of Americans want to slow AI development 🏃, Anthropic releases Claude Instant ⏩, understanding grokking 🧠

Mark Rauterkus Mark at rauterkus.com
Thu Aug 10 09:15:56 EDT 2023


Subject: 82% of Americans want to slow AI development
Plus, grokking

TLDR
*TLDR AI 2023-08-10*
🚀
*Headlines & Launches*
*82% of Americans think we should slow down AI development (7 minute read)*
<https://tracking.tldrnewsletter.com/CL0/https:%2F%2Fwww.axios.com%2F2023%2F08%2F09%2Fai-voters-trust-government-regulation%3Futm_source=tldrai/1/01000189df8eaf35-1dc6d6f4-a4e1-4e84-bed0-0a1ac5ea1bf9-000000/WzT45lMQ6Q1ThU-iZvTX7wAgIBJ0C6WSzjyuoY0021k=313>

In a new Axios survey of 1,001 people conducted over one week in July,
participants shared their opinions on a variety of AI safety and
capabilities topics. 52% think government regulation is needed, and a
large majority believe big tech companies can't be trusted to regulate
themselves.
*Anthropic Launches Improved Version Of Its Entry-Level LLM (3 minute read)*
<https://tracking.tldrnewsletter.com/CL0/https:%2F%2Ftechcrunch.com%2F2023%2F08%2F09%2Fanthropic-launches-improved-version-of-its-entry-level-llm%2F%3Futm_source=tldrai/1/01000189df8eaf35-1dc6d6f4-a4e1-4e84-bed0-0a1ac5ea1bf9-000000/icXlXnNyaMPbACFbQT9aCSura7JbfSzHX6NU2KSQEuo=313>

Anthropic has released Claude Instant, an updated version of its faster,
cheaper, text-generating model. Claude Instant generates longer, more
structured responses, follows formatting instructions better, and shows
improvements in quote extraction, multilingual capabilities, and question
answering. It is available through an API.
*Inworld AI Becomes the Best-Funded Startup in AI x Gaming (8 minute read)*
<https://tracking.tldrnewsletter.com/CL0/https:%2F%2Finworld.ai%2Fblog%2Finworld-valued-at-500-million%3Futm_source=tldrai/1/01000189df8eaf35-1dc6d6f4-a4e1-4e84-bed0-0a1ac5ea1bf9-000000/mOnGCEd-BvyfPtCgqAcOUlYdioRS-S8pcgjzfJU9Xu0=313>

Inworld AI announced a new $50M+ round led by Lightspeed Venture Partners,
bringing the total valuation of the company to over $500M. This will allow
Inworld to accelerate R&D efforts, hire top talent, build a more robust
Character Engine, expand infrastructure, and open source parts of its
platform.
🧠
*Research & Innovation*
*Understanding Grokking (21 minute read)*
<https://tracking.tldrnewsletter.com/CL0/https:%2F%2Fpair.withgoogle.com%2Fexplorables%2Fgrokking%2F%3Futm_source=tldrai/1/01000189df8eaf35-1dc6d6f4-a4e1-4e84-bed0-0a1ac5ea1bf9-000000/dozd6dfBnYT3ssIDTzbWjvlxPmX5h09d6nkY7UmhqHo=313>

The PAIR group at Google has released a lovely interactive explainer that
goes deep into grokking, the training dynamic in which a model appears to
shift from memorizing to generalizing. The phenomenon isn't well
understood, but this introduction covers much of the groundwork for
studying this strange behavior.
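The explainer studies tiny models trained on modular arithmetic, the classic setting where grokking was first observed. As a minimal sketch (the modulus, split fraction, and seed here are illustrative choices, not the explainer's exact setup), such a dataset can be built like this:

```python
import random

def modular_addition_dataset(p=113, train_frac=0.3, seed=0):
    # Every pair (a, b) with label (a + b) mod p; grokking shows up when
    # a small model is trained on only a fraction of all p*p pairs and
    # test accuracy jumps long after train accuracy saturates.
    pairs = [(a, b, (a + b) % p) for a in range(p) for b in range(p)]
    random.Random(seed).shuffle(pairs)
    cut = int(len(pairs) * train_frac)
    return pairs[:cut], pairs[cut:]

train, test = modular_addition_dataset()
```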
*Photorealistic Synthetic Data in Unreal Engine (17 minute read)*
<https://tracking.tldrnewsletter.com/CL0/https:%2F%2Farxiv.org%2Fabs%2F2308.03977%3Futm_source=tldrai/1/01000189df8eaf35-1dc6d6f4-a4e1-4e84-bed0-0a1ac5ea1bf9-000000/wulc6oidf-Zzm7boadMkincwnWptBA1tz0BVvT8ICq4=313>

Don't have millions of photorealistic images to train your algorithm? Maybe
you can generate them using PUG from Meta AI, which drives the powerful
Unreal game engine in a controllable way to generate synthetic image data
for downstream training.
*Simple synthetic data reduces sycophancy (23 minute read)*
<https://tracking.tldrnewsletter.com/CL0/https:%2F%2Farxiv.org%2Fabs%2F2308.03958%3Futm_source=tldrai/1/01000189df8eaf35-1dc6d6f4-a4e1-4e84-bed0-0a1ac5ea1bf9-000000/1JZzu5MKorx2N7NPgYLwPl4Hn5NpoKeKATQA5le9I7I=313>

Sycophancy is when a model repeats and adopts a user's opinion. It happens
more in larger models and instruction-tuned models, and can occur even on
tasks where the opinion is irrelevant, leading to bubble-like behavior.
Fine-tuning on simple synthetic data can prevent this without harming
overall performance.
🧑‍💻
*Engineering & Resources*
*Get 80% end-to-end test coverage in 4 months (Sponsor)*
<https://tracking.tldrnewsletter.com/CL0/https:%2F%2Fwww.qawolf.com%2F%3Futm_campaign=Get80PercentEndToEnd08102023%26utm_source=tldrai%26utm_medium=newsletter/1/01000189df8eaf35-1dc6d6f4-a4e1-4e84-bed0-0a1ac5ea1bf9-000000/ai2C1v-ycFgFoiJx6BMTMSoYiO5GFIgmUZrUcod3PwE=313>

There’s a shortcut to reaching high automated test coverage, without
spending years on scaling in-house teams. The answer? It's not AI. It's QA
Wolf.
<https://tracking.tldrnewsletter.com/CL0/https:%2F%2Fwww.qawolf.com%2F%3Futm_campaign=Get80PercentEndToEnd08102023%26utm_source=tldrai%26utm_medium=newsletter/2/01000189df8eaf35-1dc6d6f4-a4e1-4e84-bed0-0a1ac5ea1bf9-000000/9Y2VqCrz4roH97LPqoNFUuV4E0-KgEK_PMsGWB2_wsk=313>

QA Wolf gets you to 80% automated end-to-end test coverage in 4 months
<https://tracking.tldrnewsletter.com/CL0/https:%2F%2Fwww.qawolf.com%2F%3Futm_campaign=Get80PercentEndToEnd08102023%26utm_source=tldrai%26utm_medium=newsletter/3/01000189df8eaf35-1dc6d6f4-a4e1-4e84-bed0-0a1ac5ea1bf9-000000/DsU6hNSbtKDCjiK2oeTKJfdiJJyjwI8UMA7AEEMtKwI=313>.
Plus, they do all the test maintenance, provide unlimited parallel test
runs on their infrastructure, and send human-verified bug reports directly
to your issue tracker.

Skeptical? Schedule a demo to learn more.
<https://tracking.tldrnewsletter.com/CL0/https:%2F%2Fwww.qawolf.com%2F%3Futm_campaign=Get80PercentEndToEnd08102023%26utm_source=tldrai%26utm_medium=newsletter/4/01000189df8eaf35-1dc6d6f4-a4e1-4e84-bed0-0a1ac5ea1bf9-000000/OB3qWL4w1V4DraGGqz879KPxovkVl_RncsCZLUjYIFU=313>

PS: QA Wolf has a 4.8/5 ⭐ rating on G2 and they have multiple case studies
of customers saving $480k+ on QA engineering.
*A New Method for Improving Student Networks in Computer Vision (GitHub
Repo)*
<https://tracking.tldrnewsletter.com/CL0/https:%2F%2Fgithub.com%2Famirmansurian%2Faicsd%3Futm_source=tldrai/1/01000189df8eaf35-1dc6d6f4-a4e1-4e84-bed0-0a1ac5ea1bf9-000000/ldvH9XqQJcPsIwA9uLCL62aoWWIWV9-sKldfFjaLG-Q=313>

Deep neural networks excel at computer vision, but faster inference is
often needed. This paper introduces an Inter-Class Similarity Distillation
method and an Adaptive Loss Weighting strategy for better knowledge
transfer from a teacher network to a student network.
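The repo's method builds on logit distillation. As background, here is a sketch of the classic temperature-scaled distillation term (the generic Hinton-style loss, not the paper's Inter-Class Similarity Distillation itself):

```python
import math

def softmax(logits, temperature=1.0):
    # Standard softmax with temperature softening.
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kd_loss(student_logits, teacher_logits, temperature=4.0):
    # KL(teacher || student) on temperature-softened distributions;
    # the T^2 factor keeps gradient magnitudes comparable across T.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q)) * temperature ** 2
```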
*Integrate Private Data into LLMs while Preserving Privacy (GitHub Repo)*
<https://tracking.tldrnewsletter.com/CL0/https:%2F%2Fgithub.com%2Frcgai%2Fsimplyretrieve%3Futm_source=tldrai/1/01000189df8eaf35-1dc6d6f4-a4e1-4e84-bed0-0a1ac5ea1bf9-000000/fmCDfhKahpG1e0uqXWcfV665Cb6VjXoS7kneexv1HMU=313>

Generative AI systems have grown with the help of large language models.
SimplyRetrieve, an open-source tool, offers a user-friendly way to
integrate private data into these systems without extra tuning, using a
Retrieval-Centric Generation approach. It promises enhanced AI performance
while preserving privacy.
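As a rough illustration of the retrieval step in this kind of system — a toy keyword-overlap scorer, not SimplyRetrieve's actual implementation — private documents can be ranked against a query and the top hits prepended to the LLM prompt:

```python
def retrieve(query, documents, k=2):
    # Score each private document by word overlap with the query,
    # then return the top-k to use as context for generation.
    q = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]
```

Real systems replace the overlap score with embedding similarity, but the shape of the pipeline — retrieve, then generate with the retrieved context — is the same.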
*Magentic (GitHub Repo)*
<https://tracking.tldrnewsletter.com/CL0/https:%2F%2Fgithub.com%2Fjackmpcollins%2Fmagentic%3Futm_source=tldrai/1/01000189df8eaf35-1dc6d6f4-a4e1-4e84-bed0-0a1ac5ea1bf9-000000/6-Kyg0ZdD6jFGKhUFHFTO3trKF_YyLJp_JW6QjY-7dA=313>

Magentic makes it easy to integrate Large Language Models (LLMs) into your
Python code. Treat prompt templates as functions, using type annotations to
specify structured output. Then, seamlessly mix LLM queries and function
calling with regular Python code to create complex LLM-powered
functionality.
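To show the shape of that pattern, here is a toy, stdlib-only stand-in — hypothetical, not magentic's real API — where a decorator turns a prompt template plus a typed function signature into a callable:

```python
import inspect

def prompt(template):
    # Toy version of the template-as-function pattern: the decorated
    # function's signature names the inputs, the template renders the
    # prompt. A stub "LLM" echoes the rendered prompt; a real library
    # would call a model and parse the reply into the annotated
    # return type.
    def decorator(func):
        sig = inspect.signature(func)
        def wrapper(*args, **kwargs):
            bound = sig.bind(*args, **kwargs)
            rendered = template.format(**bound.arguments)
            return f"LLM({rendered})"  # placeholder for a model call
        return wrapper
    return decorator

@prompt("Summarize in one line: {text}")
def summarize(text: str) -> str: ...
```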
🎁
*Miscellaneous*
*Google Is Working On ‘Brain2Music’ (2 minute read)*
<https://tracking.tldrnewsletter.com/CL0/https:%2F%2Findianexpress.com%2Farticle%2Ftechnology%2Fartificial-intelligence%2Fgoogles-brain2music-ai-can-listen-to-your-brain-signals-to-reproduce-music-you-listened-to-8882357%2F%3Futm_source=tldrai/1/01000189df8eaf35-1dc6d6f4-a4e1-4e84-bed0-0a1ac5ea1bf9-000000/jQnEtFHxjAd0ncAdsn7TckD_3Ir4dMyQ65gBGvTTJeo=313>

Google is working on a new AI called ‘Brain2Music’ that uses brain imaging
data to generate music. Researchers say the AI model can generate music
that closely resembles parts of songs a person was listening to when their
brain was scanned.
*White House Announces ‘AI Cyber Challenge’ (3 minute read)*
<https://tracking.tldrnewsletter.com/CL0/https:%2F%2Fwww.engadget.com%2Fthe-white-houses-ai-cyber-challenge-aims-to-crowdsource-national-security-solutions-170003434.html%3Futm_source=tldrai/1/01000189df8eaf35-1dc6d6f4-a4e1-4e84-bed0-0a1ac5ea1bf9-000000/4S1gpsI_f0dLjpo8fpeRpM9BvsjfQntl1a1GyFv4aS0=313>

The Biden Administration revealed its plans to better defend the nation’s
critical digital infrastructure at the Black Hat USA Conference in Las
Vegas on Wednesday: it's launching a DARPA-led challenge competition to
build AI systems capable of proactively identifying and fixing software
vulnerabilities.
*Llama From Scratch (20 minute read)*
<https://tracking.tldrnewsletter.com/CL0/https:%2F%2Fblog.briankitano.com%2Fllama-from-scratch%2F%3Futm_source=tldrai/1/01000189df8eaf35-1dc6d6f4-a4e1-4e84-bed0-0a1ac5ea1bf9-000000/AFexqBn3m4gaCzGKMKbF_NNgRjb6fLGetc13B6StGPU=313>

A step-by-step guide for using the Llama paper to train TinyShakespeare.
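The guide builds up the Llama architecture piece by piece. As one example of those pieces, here is a dependency-free sketch of RMSNorm, the normalization layer Llama uses in place of LayerNorm (operating on plain lists for illustration; the guide uses PyTorch tensors):

```python
import math

def rms_norm(x, weight, eps=1e-6):
    # Root-mean-square normalization: scale x by the inverse of its RMS,
    # then apply a learned per-element gain. Unlike LayerNorm, there is
    # no mean subtraction and no bias term.
    rms = math.sqrt(sum(v * v for v in x) / len(x) + eps)
    return [w * v / rms for w, v in zip(weight, x)]
```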
⚡
*Quick Links*
*Parea AI - the developer toolkit for debugging and monitoring LLM apps
(Product)*
<https://tracking.tldrnewsletter.com/CL0/https:%2F%2Fwww.parea.ai%2F%3Futm_source=tldrai/1/01000189df8eaf35-1dc6d6f4-a4e1-4e84-bed0-0a1ac5ea1bf9-000000/yF_IXy7dVl3zMQ7Pn-G9QiaoKDykBWqek-W6_QRaHRY=313>

Experiment with prompts & model configurations in a versioned manner.
Evaluate prompts with custom-defined Python evaluation metrics on a large
scale. Monitor LLM applications via API and view analytics on a dashboard.
*Fastest way to tune Llama (Colab Link)*
<https://tracking.tldrnewsletter.com/CL0/https:%2F%2Fcolab.research.google.com%2Fdrive%2F1Zmaceu65d7w4Tcd-cfnZRb6k_Tcv2b8g%3Futm_source=tldrai/1/01000189df8eaf35-1dc6d6f4-a4e1-4e84-bed0-0a1ac5ea1bf9-000000/IAEuj5P2bWHCwCD_n7K5P7UCO18oF4R-YYNtYCAXtzk=313>

Upload your JSONL data to your drive, link it, and run this notebook with
QLoRA and SFT training to get a custom-tuned Llama2 model. This seems to be
the most minimal example I have found for tuning and works well. Most
importantly, the model uses a (prompt, response) format.
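Preparing that (prompt, response) JSONL file is a one-liner per record — one JSON object per line. The field names below are an assumption for illustration; check the notebook for the exact schema it expects:

```python
import json

records = [
    {"prompt": "What is QLoRA?",
     "response": "A memory-efficient fine-tuning method."},
    {"prompt": "Name a Llama 2 size.",
     "response": "7B."},
]

with open("train.jsonl", "w") as f:
    for rec in records:  # JSONL: one JSON object per line
        f.write(json.dumps(rec) + "\n")
```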
*api2ai (GitHub Repo)*
<https://tracking.tldrnewsletter.com/CL0/https:%2F%2Fgithub.com%2Fmquan%2Fapi2ai%3Futm_source=tldrai/1/01000189df8eaf35-1dc6d6f4-a4e1-4e84-bed0-0a1ac5ea1bf9-000000/BH9aewngOB74RDJEcUYeOX6FQFK544EyUYA3eOHwgsg=313>

Create an API assistant from any OpenAPI spec.

*TLDR Talent*
<https://tracking.tldrnewsletter.com/CL0/https:%2F%2Fdanni763618.typeform.com%2Fto%2FrSL4lOH3/1/01000189df8eaf35-1dc6d6f4-a4e1-4e84-bed0-0a1ac5ea1bf9-000000/op6ZB4zBM48bIM7m0OX_8C54p0Ffs8Xec0SFgiGn9hw=313>
is our exclusive community where we help world-class tech talent get
intros to companies of their choice, along with a number of exciting
startups and tech companies curated by TLDR.

We give you full control of the process: you can specify whether you're
actively searching or only passively interested if something amazing comes
along.
Set your preferred compensation, seniority/title/role, specific companies
(or types of companies) you’d like to work for and more. *Click here to
apply*
<https://tracking.tldrnewsletter.com/CL0/https:%2F%2Fdanni763618.typeform.com%2Fto%2FrSL4lOH3/2/01000189df8eaf35-1dc6d6f4-a4e1-4e84-bed0-0a1ac5ea1bf9-000000/P0wmI8_sCAQ0JfAEMND9VGkkqjcCtegKafp-7liK4K0=313>
.

If your company is interested in reaching an audience of AI professionals
and decision makers, you may want to *advertise with us*
<https://tracking.tldrnewsletter.com/CL0/https:%2F%2Fshare.hsforms.com%2F1OxvmrkcFS4qsxKpNXCi76wee466%3Futm_source=tldrai%26utm_medium=newsletter/2/01000189df8eaf35-1dc6d6f4-a4e1-4e84-bed0-0a1ac5ea1bf9-000000/q6zqoX5qToBbuOIOyTWvQJR37O0Hn7ONw4g1c3aYjEg=313>
.

If you have any comments or feedback, just respond to this email!

Thanks for reading,
Andrew Tan
<https://tracking.tldrnewsletter.com/CL0/https:%2F%2Ftwitter.com%2Fandrewztan/1/01000189df8eaf35-1dc6d6f4-a4e1-4e84-bed0-0a1ac5ea1bf9-000000/W7Ru1xITJRL5CyGHY8MK_jk5yL9_aqm4FB4zzw5pGRc=313>
& Andrew Carr
<https://tracking.tldrnewsletter.com/CL0/https:%2F%2Ftwitter.com%2Fandrew_n_carr/1/01000189df8eaf35-1dc6d6f4-a4e1-4e84-bed0-0a1ac5ea1bf9-000000/DBxT5O5O4MIVUp0lDqsl7B3bEjU8aFwj-S5BGZ9oJPY=313>

If you don't want to receive future editions of TLDR AI, please click here
to unsubscribe
<https://tracking.tldrnewsletter.com/CL0/https:%2F%2Factions.tldrnewsletter.com%2Funsubscribe%3Fep=1%26l=eedf6b14-3de3-11ed-9a32-0241b9615763%26lc=078d99d6-b44d-11ed-ba38-55928061a93d%26p=99659d9c-3763-11ee-aabc-3f083537c2b1%26pt=campaign%26pv=4%26spa=1691672427%26t=1691672817%26s=c23cc4a82d95bc0903338a78a6a761a7e1dcb8915c8143cb36d3657675c5c3db/1/01000189df8eaf35-1dc6d6f4-a4e1-4e84-bed0-0a1ac5ea1bf9-000000/lu_8pQTUsDj7rr50kg5WmreZtHeRBjqs1LRueSO5Mgk=313>.