Ndix Makondo 🇿🇦🇯🇵 (@collinsoptix)'s Twitter Profile
Ndix Makondo 🇿🇦🇯🇵

@collinsoptix

AI and robotics, seeking to uplift Africa through technology.

ID: 84659526

Joined: 23-10-2009 18:46:54

349 Tweets

109 Followers

381 Following

Asim Munawar (@asimunawar):

Join us for IBM Neuro-Symbolic AI Workshop 2022 on the 18th and 19th of January (8:30am-7:30pm ET).
Register for free at: ibm.biz/ns-wkshp
#AI #IBM
JB Alayrac (@jalayrac):

It is great to see the excitement about 🦩! As shown in different examples over the last few days, interacting with 🦩 has been quite fun, unique, and sometimes mind-blowing. However, 🦩 has clear limitations, as detailed in this 🧵! 1/11

François Chollet (@fchollet):

In ML there are only very simple concepts. Understanding ML is within reach for almost anyone with programming experience, if you explain it in terms they can understand (e.g. code and figures), and not using unnecessarily opaque notation they can't parse.
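
In that spirit, here is a toy illustration of how little machinery one of ML's core concepts needs: gradient descent on a one-parameter model, in plain Python with no libraries (a sketch for illustration):

    # Fit y = w*x to data by repeatedly nudging w against the
    # gradient of the squared error. No framework required.
    data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # points on y = 2x
    w, lr = 0.0, 0.05
    for step in range(100):
        # d/dw of sum (w*x - y)^2 over the data
        grad = sum(2 * (w * x - y) * x for x, y in data)
        w -= lr * grad
    print(w)  # converges to ~2.0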

Sasha Rush (@srush_nlp):

It's a joke that all NLP talks must include this graph. 

But if you are a student it is a bit intimidating. How can you become an expert in where we are going if you can barely run BERT?  

I asked Twitter for specific advice on what you might focus on:
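
On the "barely run BERT" point, a first step is only a few lines with the Hugging Face transformers library (a sketch; assumes transformers and a backend such as torch are installed):

    # Masked-word prediction with a pretrained BERT.
    from transformers import pipeline

    unmasker = pipeline("fill-mask", model="bert-base-uncased")
    for pred in unmasker("Paris is the [MASK] of France."):
        print(pred["token_str"], round(pred["score"], 3))
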
Andrej Karpathy (@karpathy):

Aparna Dhinakaran (@aparnadhinak) Gloria Felicia (@gloriafelicia_) It's a great question. I roughly think of finetuning as analogous to expertise in people:
- Describe a task in words ~= zero-shot prompting 
- Give examples of solving task ~= few-shot prompting
- Allow person to practice task ~= finetuning

With this analogy in mind, it's
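
A concrete rendering of the first two rungs of that ladder (a sketch; the sentiment task and prompts are illustrative, and either string would be sent to whatever completion API you use):

    # Zero-shot prompting: describe the task in words only.
    zero_shot = (
        "Classify the sentiment of this review as positive or negative.\n"
        "Review: 'The battery died in an hour.'\nSentiment:"
    )

    # Few-shot prompting: prepend worked examples of the same task.
    examples = [("Loved every minute of it.", "positive"),
                ("Total waste of money.", "negative")]
    few_shot = "".join(
        f"Review: '{r}'\nSentiment: {s}\n" for r, s in examples
    ) + "Review: 'The battery died in an hour.'\nSentiment:"

    # Finetuning, the third rung, instead bakes many such examples
    # into the model's weights offline rather than into the prompt.
    print(zero_shot, few_shot, sep="\n---\n")
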
elvis (@omarsar0):

Just In! DeepMind presents AlphaDev, a deep reinforcement learning agent which discovers faster sorting algorithms from scratch.
 
The algorithms outperform previously known human benchmarks and have been integrated into the LLVM C++ library. 

nature.com/articles/s4158…
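
For context, the routines AlphaDev improved are tiny fixed-size sorts in LLVM libc++, built from fixed sequences of compare-and-swap steps (sorting networks). A sketch of the idea in Python; the discovered improvements themselves are at the x86 assembly level:

    # A 3-element sorting network: a fixed sequence of
    # compare-and-swap operations. AlphaDev searched for shorter
    # instruction sequences implementing small sorts like this.
    def compare_swap(a, i, j):
        if a[i] > a[j]:
            a[i], a[j] = a[j], a[i]

    def sort3(a):
        for i, j in [(0, 1), (0, 2), (1, 2)]:
            compare_swap(a, i, j)
        return a

    print(sort3([3, 1, 2]))  # [1, 2, 3]
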
Celia Cintas (@rtfmcelia):

📣 If you’re attending Deep Learning Indaba and you work on #TrustML, submit an extended abstract to our workshop (co-organized w/ Aisha Alaagib, Mbangula Lameck Amugongo, Siobhan Mackenzie Hall, Tejumade & Nathi 🤩). Are you working on audit techniques ⚖️? Robust ML 🤖? Privacy 🔐? ➡️ trustaideepindaba.github.io/2023/06/07/cfp/

elvis (@omarsar0):

This is one of the most comprehensive survey papers on LLMs!

They recently updated it. 

It includes over 600 references, an LLM collection, useful prompting tips, evaluation of capabilities, a GitHub repo, and more.

paper: arxiv.org/abs/2303.18223
repo: github.com/RUCAIBox/LLMSu…
elvis (@omarsar0):

🎓Stanford XCS224U: Natural Language Understanding (2023)

It's great to see a new iteration of one of my favorite courses on natural language understanding. 

Covers topics such as contextual word representations, information retrieval, in-context learning, behavioral evaluation
elvis (@omarsar0):

Anti-hype LLM Reading List

This is actually a really good list of papers and reading materials on LLMs. Love the curation by vicki 🦋 (@vboykis).

gist.github.com/veekaybee/be37…
elvis (@omarsar0):

A Survey on LLM-based Autonomous Agents

Great repository containing a collection of papers on LLM-based autonomous agents.  

The survey paper for this came out a few days ago as well. 

repo: github.com/Paitesanshi/LL…

paper: arxiv.org/abs/2308.11432
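
For readers new to the area, the common core of the agents surveyed is a loop in which the model chooses an action, the action is executed, and the observation is fed back into the context. A hypothetical sketch, with query_llm() and run_tool() standing in for a real model and real tools:

    # Skeleton of an LLM-based autonomous agent (stubs, not a real model).
    def query_llm(prompt):
        return "search: LLM agent survey"  # hypothetical model call

    def run_tool(action):
        return f"(results for {action!r})"  # hypothetical tool executor

    def agent(task, max_steps=3):
        history = [f"Task: {task}"]
        for _ in range(max_steps):
            action = query_llm("\n".join(history) + "\nNext action:")
            observation = run_tool(action)
            history += [f"Action: {action}", f"Observation: {observation}"]
        return history

    print("\n".join(agent("summarize recent work on LLM agents")))
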
elvis (@omarsar0):

An Open LLM and How to Train It with a $100K Budget

This paper introduces a new open LLM called FLM-101B.

The paper claims that "the LLM with 101B parameters and 0.31TB tokens can be trained on a $100K budget."

If true, this is amazing! Performance is okay on some of the tasks but I
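
A back-of-envelope check using the standard ~6·N·D estimate for training FLOPs shows why the claim is striking: at full model size, the naive cost lands well above $100K, so the savings have to come from elsewhere. The per-GPU throughput and price below are illustrative assumptions, not figures from the paper:

    # Rough training-compute estimate: FLOPs ~= 6 * params * tokens.
    params = 101e9    # 101B parameters
    tokens = 0.31e12  # 0.31T tokens (the tweet's "0.31TB tokens")
    flops = 6 * params * tokens
    print(f"~{flops:.2e} FLOPs")  # prints ~1.88e+23

    # Illustrative assumptions: 150 TFLOP/s sustained per GPU, $2/GPU-hour.
    gpu_hours = flops / 150e12 / 3600
    print(f"~{gpu_hours:,.0f} GPU-hours, ~${2 * gpu_hours:,.0f} naive cost")
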
elvis (@omarsar0):

ScrapeGraphAI: You Only Scrape Once

Neat little web scraping tool powered by LLMs. 

LLMs are powerful information extractors so it's not surprising to see the popularity of this Python library and many others.

It works with Ollama and other LLM providers.
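
For a flavor of the API, the library's documented quick-start centers on SmartScraperGraph; the sketch below follows that pattern, but config keys and model names vary across versions, so treat the details as assumptions:

    # Rough ScrapeGraphAI usage (pip install scrapegraphai); details
    # such as config keys may differ between library versions.
    from scrapegraphai.graphs import SmartScraperGraph

    graph_config = {
        "llm": {
            "model": "ollama/llama3",             # local model via Ollama
            "base_url": "http://localhost:11434",  # default Ollama endpoint
        },
    }

    scraper = SmartScraperGraph(
        prompt="List the article titles on this page.",
        source="https://example.com",
        config=graph_config,
    )
    print(scraper.run())
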
elvis (@omarsar0):

I've delivered my Advanced Prompting for LLMs training to almost 400 people over the last year.

Here are some of the main themes people learn about:

- applying advanced prompting techniques like chain-of-thought (sketched below)
- best practices to improve the reliability, robustness, and
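
To make the chain-of-thought theme concrete: the few-shot example includes the intermediate reasoning, not just the answer, which nudges the model to reason step by step on the new problem (an illustrative sketch):

    # A chain-of-thought prompt: worked reasoning precedes the answer.
    cot_prompt = (
        "Q: A shop has 23 apples and sells 9. How many are left?\n"
        "A: It starts with 23 apples. Selling 9 leaves 23 - 9 = 14. "
        "The answer is 14.\n\n"
        "Q: A library has 42 books and lends out 17. How many remain?\n"
        "A:"
    )

    # This string would be sent to any completion API; without the
    # worked step, models guess more often on multi-step problems.
    print(cot_prompt)
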
elvis (@omarsar0):

CS388: Natural Language Processing

Great list of byte-sized lectures on NLP and LLMs.

It provides nice summaries of modern NLP topics and recent ones like RLHF, instruction-tuning, few-shot prompting, chain-of-thought, and more.

A great resource to catch up on the space.
Fei-Fei Li (@drfeifei):

This is a really good summary of how the breakthroughs in neural networks (AlexNet), big data (ImageNet), and GPUs led to the birth of modern AI and computer vision. Thank you, Ksenia Se and your TuringPost!

Rohan Paul (@rohanpaul_ai):

MapReduce meets LLMs: Divide-and-conquer approach lets regular LLMs process 100x longer documents than their context limit

Using MapReduce principles, small-context LLMs now handle million-token documents efficiently.

Original Problem 🔍:

LLMs struggle to process extremely
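
The divide-and-conquer recipe itself is short: split the document into chunks that fit the context window, query each chunk (map), then merge the partial answers (reduce). A hypothetical sketch, with query_llm() standing in for a real model call:

    # MapReduce-style question answering over a long document.
    def query_llm(prompt):
        return f"(answer from {len(prompt)} chars of prompt)"  # stub

    def map_reduce_answer(document, question, chunk_size=2000):
        # Map: answer the question against each chunk independently.
        chunks = [document[i:i + chunk_size]
                  for i in range(0, len(document), chunk_size)]
        partials = [query_llm(f"{question}\n\nContext:\n{c}") for c in chunks]
        # Reduce: merge the partial answers into one final answer.
        return query_llm(f"{question}\n\nPartial answers:\n" + "\n".join(partials))

    doc = "lorem ipsum " * 2000  # stands in for a very long document
    print(map_reduce_answer(doc, "What is the main finding?"))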