John Langford (@johnclangford) 's Twitter Profile
John Langford

@johnclangford

Solving Machine Learning at Microsoft in New York.
icml.cc pandemic past president.
vowpalwabbit.org makes RL real.
hunch.net for thinking out loud.

ID: 1210596212140892160

Link: http://hunch.net/~jl
Joined: 27-12-2019 16:21:13

239 Tweets

9.9K Followers

41 Following

John Langford (@johnclangford) 's Twitter Profile Photo

New reqs for low to high level researcher positions: jobs.careers.microsoft.com/global/en/job/… , jobs.careers.microsoft.com/global/en/job/…, jobs.careers.microsoft.com/global/en/job/…, jobs.careers.microsoft.com/global/en/job/…, with postdocs from Akshay and Miro Dudik x.com/MiroDudik/stat… . Please apply or pass to those who may :-)

John Langford (@johnclangford) 's Twitter Profile Photo

From AI Frontiers, Yadong++ have released OmniParser (microsoft.github.io/OmniParser/), which parses screens better than vision models. The code is open source, and the model is on Hugging Face: huggingface.co/microsoft/Omni… .

John Langford (@johnclangford) 's Twitter Profile Photo

A general-purpose system of agents which roughly doubles performance over GPT-4 models on computational productivity tasks. The blog post (microsoft.com/en-us/research… ) has some nice demos.

Ahmed Awadallah (@ahmedhawadallah) 's Twitter Profile Photo

Our #Microsoft AI Frontiers team is at #NeurIPS2024 this week with several sessions on AutoGen, #Eureka, #OmniParser and #AgentInstruct

#AutoGen 0.4: Redefining Agentic AI Systems
- Workshop on Tuesday@12pm
- Microsoft Booth: Tue@3:30pm, Wed@9:00am and Thu@3:00pm

#Trace:

Peter Lee (@peteratmsr) 's Twitter Profile Photo

🚀 Phi-4 is here! A small language model that performs as well as (and often better than) large models on certain types of complex reasoning tasks such as math. Useful for us in Microsoft Research, and available now for all researchers on the Azure AI Foundry! aka.ms/phi4blog

John Langford (@johnclangford) 's Twitter Profile Photo

This is a deep rewrite of AutoGen creating modularity, which in turn enables extension, experimentation, and use over the long haul. The event-driven core is probably familiar to many folks who have looked at high-usage systems.
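The event-driven idea can be illustrated with a minimal sketch (this is a generic illustration of an event-driven agent core, not the actual AutoGen 0.4 API; `EventBus`, `echo_agent`, and `upper_agent` are hypothetical names):

```python
import asyncio
from dataclasses import dataclass, field

# Hypothetical sketch of an event-driven agent core: agents subscribe to
# topics on a bus and react to published events. New agents can be added
# without modifying existing ones, which is the kind of modularity that
# enables extension and experimentation over the long haul.

@dataclass
class EventBus:
    handlers: dict = field(default_factory=dict)

    def subscribe(self, topic, handler):
        # Register an async handler (an "agent") for a topic.
        self.handlers.setdefault(topic, []).append(handler)

    async def publish(self, topic, message):
        # Deliver the event to every subscribed agent and collect replies.
        results = []
        for handler in self.handlers.get(topic, []):
            results.append(await handler(message))
        return results

async def echo_agent(message):
    return f"echo: {message}"

async def upper_agent(message):
    return message.upper()

async def main():
    bus = EventBus()
    bus.subscribe("task", echo_agent)
    bus.subscribe("task", upper_agent)  # added independently of echo_agent
    return await bus.publish("task", "hello")

print(asyncio.run(main()))  # → ['echo: hello', 'HELLO']
```

Because agents only see events, swapping one agent out for an experiment leaves the rest of the system untouched.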

John Langford (@johnclangford) 's Twitter Profile Photo

It turns out there is an objective-based approach for coalescing states from sequential observations, and doing so allows you to solve new problems.

John Langford (@johnclangford) 's Twitter Profile Photo

A new post: Headroom for AI Development hunch.net/?p=13763046 . It's quite interesting to compare biological and silicon capabilities.

John Langford (@johnclangford) 's Twitter Profile Photo

I still support Ukraine personally. All the arguments I've seen for betraying Ukraine ($, peace, nuclear war, stalemate) are bogus after looking into the details.

John Langford (@johnclangford) 's Twitter Profile Photo

Tying the parameters of the forward and backward models seems like a potentially good idea that we did not try. I'm skeptical that this alone would solve stargraph, since the structure needed to create belief states does not exist. But please go ahead and experiment.
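For readers unfamiliar with the suggestion, here is a toy sketch of what tying forward and backward parameters could mean: one shared weight matrix used by both directions, so any update affects both. This is an assumption-laden illustration in NumPy, not the authors' actual architecture.

```python
import numpy as np

# Hypothetical illustration of parameter tying: a single shared matrix W
# parameterizes both a forward step (toward later observations) and a
# backward step (toward earlier ones), here via its transpose.

rng = np.random.default_rng(0)
d = 8
W = rng.standard_normal((d, d))  # the one shared (tied) parameter set

def forward_step(x):
    # Forward direction uses W directly.
    return np.tanh(W @ x)

def backward_step(x):
    # Backward direction reuses the SAME W (transposed), so the two
    # directions cannot drift apart: updating W changes both.
    return np.tanh(W.T @ x)

x = rng.standard_normal(d)
f = forward_step(x)
b = backward_step(x)
```

Whether this sharing helps depends on the task; the point of the tweet is that sharing alone does not supply the structure needed to form belief states.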

Riashat Islam (@riashatislam) 's Twitter Profile Photo

Agent objectives will be at the core of the upcoming "era of experience" for working with LLMs, and BST, for learning compact belief state representations, is an early step toward that. Chat with the authors at #ICLR2025 to hear about the exciting things we are doing with it!

John Langford (@johnclangford) 's Twitter Profile Photo

My understanding is that Phi-4-reasoning functions as well as other reasoning models that are roughly an order of magnitude larger.