Dr Ayoub Bouguettaya (@a_bouguettaya)'s Twitter Profile
Dr Ayoub Bouguettaya

@a_bouguettaya

Social and Health Psychology is my passion. Technology is my hobby. Researcher in Internet, Health, and Society at @CedarsSinaiMed

ID: 927388293603344384

Joined: 06-11-2017 04:12:54

1.1K Tweets

242 Followers

443 Following

BensenHsu (@bensenhsu)'s Twitter Profile Photo

<a href="/NTFabiano/">Nicholas Fabiano, MD</a> This study investigates the neural correlates of empathy in people with major depressive disorder (MDD). Empathy is an important ability for successful social interactions, but it seems to be affected in individuals with MDD. The researchers aim to understand how empathy is
Taha Yasseri (@tahayasseri)'s Twitter Profile Photo

Three papers we have been working on for some time were finally published within a month. In this short interview, I explain how these three papers will shape the research agenda of our "Centre for Sociology of Humans and Machines" in the coming years. 
tcd.ie/research/resea…
René Mõttus (@renemottus)'s Twitter Profile Photo

Now out in J Applied Psychology, a rigorous #IOPsych paper on occupational differences in #personality traits, based on 70K people, 250+ jobs and comprehensive trait assessments. You can also see which jobs match your traits (link 👇). 🧵
doi.org/10.1037/apl000…
<a href="/AnniKatlin/">Kätlin Anni</a> <a href="/ukuv/">Uku Vainik</a>
Ethan Mollick (@emollick)'s Twitter Profile Photo

I keep hearing from executives that they expect that a new generation of "AI natives" will show them how to use AI. I think this is a mistake:
1) Our research shows younger people do not really get AI or how to integrate into work
2) Experienced managers are often good prompters
Ethan Mollick (@emollick)'s Twitter Profile Photo

We wrote a piece on how to create prompts that create prompts, with a focus on how teachers can create reusable tools for tasks (we call these prompts blueprints). The full text of all the prompts is included, as are GPTs if you just want to try them out.
hbsp.harvard.edu/inspiring-mind…

Rohan Paul (@rohanpaul_ai)'s Twitter Profile Photo

"Understanding LLMs from Scratch Using Middle School Math" Neural networks learn to predict text by converting words to numbers and finding patterns through attention mechanisms. So the network turns words into numbers, then use attention to decide what's important for

"Understanding LLMs from Scratch Using Middle School Math"

Neural networks learn to predict text by converting words to numbers and finding patterns through attention mechanisms.

So the network turns words into numbers, then uses attention to decide what's important for
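The mechanism the (truncated) explainer above describes can be sketched in a few lines: a toy, dependency-free Python version of dot-product attention. The word vectors, dimensions, and names here are invented for illustration, not taken from the linked article.

```python
import math

def softmax(xs):
    # Subtract the max for numerical stability, then normalize exponentials.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """Single-query dot-product attention over toy word vectors."""
    # Score each key by its dot product with the query.
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    # Turn scores into weights that sum to 1: "what's important".
    weights = softmax(scores)
    # Output is the weighted mixture of value vectors.
    dim = len(values[0])
    out = [sum(w * v[i] for w, v in zip(weights, values)) for i in range(dim)]
    return out, weights

# Toy "words as numbers": three 2-d embeddings.
keys = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
values = keys
query = [1.0, 0.0]
out, weights = attention(query, keys, values)
```

The query attends most to the keys it overlaps with, so the first and third vectors get more weight than the second.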
Ethan Mollick (@emollick)'s Twitter Profile Photo

A reason why prompt engineering is becoming less important for most people: larger models are less sensitive to prompt variations, including roles & goals, than smaller models.

There are still cases where prompt engineering matters, but don't sweat it too much for casual answers
Ethan Mollick (@emollick)'s Twitter Profile Photo

AI is good at pricing, so when GPT-4 was asked to help merchants maximize profits, it did exactly that, secretly coordinating with other AIs to keep prices high!

So... aligned for whom? The merchant? The consumer? Society? The results we get depend on how we define 'help'
Jay Van Bavel, PhD (@jayvanbavel)'s Twitter Profile Photo

Social identity complexity is positively related to universal cooperation (& negatively related to parochialism). Diverse identities allow people to assess situations from different perspectives, reducing bias & enhancing intergroup understanding and communication

Yury Zablotski (@yuzar___)'s Twitter Profile Photo

BOOTSTRAPPED REGRESSION #R ! 💪 via {tidymodels} 

9 min Video for details: youtu.be/sjCxIHVGkdE

#rstats #statistics #dataanalysis #stats #datascience #r4ds #tidymodels
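The linked video does this with {tidymodels} in R. As a minimal, dependency-free sketch of the same idea in Python (resample rows with replacement, refit the regression each time, inspect the slope distribution) — the data here is simulated for illustration:

```python
import random

def fit_line(xs, ys):
    # Ordinary least squares for y = a + b*x (closed form).
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    return my - b * mx, b  # intercept, slope

def bootstrap_slopes(xs, ys, n_boot=1000, seed=42):
    """Refit the regression on resampled (x, y) pairs to get a slope distribution."""
    rng = random.Random(seed)
    slopes = []
    for _ in range(n_boot):
        sample = [rng.randrange(len(xs)) for _ in xs]  # resample rows with replacement
        bx = [xs[i] for i in sample]
        by = [ys[i] for i in sample]
        if len(set(bx)) < 2:  # degenerate resample (all x equal): skip
            continue
        slopes.append(fit_line(bx, by)[1])
    return slopes

# Simulated noisy line y ≈ 1 + 2x.
rng = random.Random(0)
xs = [i / 10 for i in range(50)]
ys = [1 + 2 * x + rng.gauss(0, 0.3) for x in xs]

slopes = sorted(bootstrap_slopes(xs, ys))
# Percentile 95% confidence interval for the slope.
ci = (slopes[int(0.025 * len(slopes))], slopes[int(0.975 * len(slopes))])
```

The percentile interval comes straight from the empirical distribution of refitted slopes, which is the core trick bootstrapping offers over a single analytic standard error.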
Joachim Schork (@joachimschork)'s Twitter Profile Photo

Compare multiple statistical models effortlessly with ggstats, a versatile extension package for ggplot2 that simplifies data visualization tasks. The ggcoef_compare() function allows you to compare the coefficients of several models side by side, providing an intuitive way to
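ggcoef_compare() belongs to the R package ggstats; a rough analogue of the underlying idea (fit several models, then line their coefficients up term by term) can be sketched in plain Python. The data and model names below are invented for illustration:

```python
def ols(xs, ys):
    # Ordinary least squares for y = a + b*x (closed form).
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    return my - b * mx, b  # intercept, slope

# Same outcome, two candidate predictors (hypothetical data).
y  = [2.1, 3.9, 6.2, 8.1, 9.8]
x1 = [1.0, 2.0, 3.0, 4.0, 5.0]
x2 = [0.9, 2.2, 2.8, 4.1, 5.2]

coefs = {}
for name, xs in {"model_x1": x1, "model_x2": x2}.items():
    intercept, slope = ols(xs, y)
    coefs[name] = {"(Intercept)": round(intercept, 3), "slope": round(slope, 3)}

# One row per term, one column per model -- the side-by-side view
# that ggcoef_compare() draws as a dot plot.
for term in ["(Intercept)", "slope"]:
    print(term, *(coefs[m][term] for m in coefs))
```

ggcoef_compare() additionally plots confidence intervals per coefficient; the table above only captures the side-by-side alignment of point estimates.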
Steve Rathje (@steverathje2)'s Twitter Profile Photo

🚨New paper🚨 in British Journal of Psychology, led by <a href="/james_k_he/">James He</a> with <a href="/WallisFelix/">Felix Wallis</a> &amp; <a href="/AndresGvirtz/">Andrés Gvirtz</a> 

Large language models can simulate human collective behavior when they interact with each other in an artificial society.
Hannah Overbye-Thompson (@hannah_overbye)'s Twitter Profile Photo

Training AI on data with small biases amplifies bias; humans then learn these biases. This was found in perceptual (emotion & motion) and social tasks (employment profiling) where bias was measured as deviating from ground truth - 50/50 stimuli, motion %, demographic distribution

Danish Pruthi (@danish037)'s Twitter Profile Photo

Remember this study about how LLM-generated research ideas were rated as more novel than expert-written ones? We find a large fraction of such LLM-generated proposals (≥ 24%) to be skillfully plagiarized, bypassing inbuilt plagiarism checks and unsuspecting experts. A 🧵

Ethan Mollick (@emollick)'s Twitter Profile Photo

This paper is even more insane to read than the thread. Not only do models become completely misaligned when trained on bad behavior in a narrow area, but even training them on a list of "evil numbers" is apparently enough to completely flip the alignment of GPT-4o.

Eiko Fried (@eikofried)'s Twitter Profile Photo

6/6 Abstract screenshotted, preprint on the OSF: 
osf.io/preprints/psya…

I've learned a lot from Emma across various projects we've been working on in the last year; you can check out her work here: emmakbridger.github.io