Parameter Lab (@parameterlab)'s Twitter Profile
Parameter Lab

@parameterlab

Empowering individuals and organisations to safely use foundational AI models.

ID: 1640056112329240578

Website: http://parameterlab.de | Joined: 26-03-2023 18:20:43

127 Tweets

259 Followers

109 Following

Seong Joon Oh (@coallaoh)'s Twitter Profile Photo

The GAN community peaked around 2019 and has been declining since 2020. Today, there is little activity in this area, as Diffusion Models, which gained momentum in 2022, have taken over.

Visit researchtrend.ai to learn more. #ResearchTrendAI

We're seeking a marketer. DM me!
Parameter Lab (@parameterlab)'s Twitter Profile Photo

🚨📄 Exciting new research! Discover when and at what scale we can detect if specific data was used in training LLMs — a method known as Membership Inference (MIA)! Our findings open new doors for using MIA as potential legal evidence in AI. More info in the 🧵below.

Seong Joon Oh (@coallaoh)'s Twitter Profile Photo

🔥 Curious about the latest trends in AI?

ResearchTrend.AI gives you an intuitive overview of diverse topics in AI. Dive into our dashboard featuring 55 communities ready for you to explore!

👉 Visit: researchtrend.ai/communities

Browse and follow your favourite communities.
Parameter Lab (@parameterlab)'s Twitter Profile Photo

🎉 We’re pleased to share the release of the models from our Apricot 🍑 paper, "Calibrating Large Language Models Using Their Generations Only", accepted at ACL 2024! At Parameter Lab, we believe openness and reproducibility are essential for advancing science, and we've put in

Seong Joon Oh (@coallaoh)'s Twitter Profile Photo

There's an internship opening at Parameter Lab: parameterlab.de/careers

The research outputs have been quite successful so far: researchtrend.ai/organizations/…

Haritz Puerto @ NAACL 2025 🌵🇺🇸 (@haritzpuerto)'s Twitter Profile Photo

techcrunch.com/2025/01/09/mar…

From time to time we hear news like this. However, proving that an LLM was trained on a specific document is very challenging 🥴 This motivated my latest work, where we show that current methods can be effective if we use enough data 🧐

Seong Joon Oh (@coallaoh)'s Twitter Profile Photo

We just wanted to say: Membership inference is unlikely to succeed on n-grams or even paragraphs. Language models require **multiple documents** to gather enough evidence to determine whether specific data points were included in training. Accepted to #NAACL2025 Findings.

Parameter Lab (@parameterlab)'s Twitter Profile Photo

🔎 Wonder how to prove an LLM was trained on a specific text? The camera-ready of our Findings of #NAACL 2025 paper is available! 📌 TLDR: long texts are needed to gather enough evidence to determine whether specific data points were included in an LLM's training:
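The evidence-aggregation idea behind this finding can be illustrated with a toy, loss-based membership-inference sketch. Everything below is hypothetical and purely illustrative (the scoring function, the pooling, and the threshold are assumptions, not the paper's actual statistical test): texts seen during training tend to receive higher likelihood, and pooling that signal over a long collection of tokens is what makes the decision reliable.

```python
from statistics import mean

def membership_score(token_logprobs):
    # Loss-based MIA signal: average negative log-likelihood of a
    # document under the model. Texts seen during training tend to
    # get higher likelihood, i.e. a lower score here.
    return -mean(token_logprobs)

def collection_decision(docs_logprobs, threshold):
    # Pool per-document scores over a whole collection. The point
    # is that aggregation over many tokens/documents, not any single
    # short snippet, provides enough evidence for a decision.
    pooled = mean(membership_score(lp) for lp in docs_logprobs)
    return pooled, pooled < threshold

# Synthetic token log-probs: "member" docs get higher likelihood.
member_docs = [[-2.0] * 100 for _ in range(20)]
nonmember_docs = [[-3.0] * 100 for _ in range(20)]

print(collection_decision(member_docs, threshold=2.5))     # (2.0, True)
print(collection_decision(nonmember_docs, threshold=2.5))  # (3.0, False)
```

With only a handful of tokens, the two score distributions overlap heavily; the separation above only emerges because the score is averaged over thousands of tokens.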

Parameter Lab (@parameterlab)'s Twitter Profile Photo

👥 We're Hiring: Senior/Junior Data Engineer!

📍 Remote or Local | Full-Time or Part-Time

At ResearchTrend.AI, we’re building a platform that connects researchers and AI engineers worldwide—helping them stay ahead with daily digests, insightful summaries, and interactive
Haritz Puerto @ NAACL 2025 🌵🇺🇸 (@haritzpuerto)'s Twitter Profile Photo

I will be in person at #NAACL2025 🌵🇺🇸 to present Scaling Up Membership Inference: When and How Attacks Succeed on LLMs. Come and say hi 👋 if you want to know how to prove whether an LLM was trained on a data point!

Haritz Puerto @ NAACL 2025 🌵🇺🇸 (@haritzpuerto)'s Twitter Profile Photo

#NAACL2025 has started! I'll be presenting my work at Parameter Lab about detecting pretraining data.

🗓️ Friday, May 2, 11:00 AM - 12:30 PM
🗺️ Poster Session 8 - APP: NLP Applications
Location: Hall 3

Work with Martin Gubri, Sangdoo Yun, and Seong Joon Oh

Haritz Puerto @ NAACL 2025 🌵🇺🇸 (@haritzpuerto)'s Twitter Profile Photo

We see news like this from time to time, and that's why it's vital to keep researching tools to prove these cases! Our #NAACL2025 paper shows that with over 10k tokens, we can reliably detect whether a text was part of an LLM's training data aclanthology.org/2025.findings-…

Parameter Lab (@parameterlab)'s Twitter Profile Photo

🧪 Our latest research: Does SEO boost the visibility of content in LLM-based conversational search?

We present C-SEO Bench, a benchmark to evaluate conversational SEO strategies.

Key takeaway: SEO methods that target LLMs do not work. But surprisingly, traditional SEO is not