Simons Institute
  • Videos: 5,372
  • Views: 7,174,339
Illuminating protein space with generative models
John Ingraham (Generate Biomedicines)
simons.berkeley.edu/talks/john-ingraham-generate-biomedicines-2024-06-11
AI≡Science: Strengthening the Bond Between the Sciences and Artificial Intelligence
Proteins are the dominant functional molecules on earth, and yet our ability to leverage them to perform new functions that would be useful to people has largely relied on copying and paraphrasing nature. What does it take to build learning systems that can generalize to new parts of protein space? Amidst the flurry of activity in applying generative modeling to protein design in recent years, I will share some of our own experiences with building learning systems that can generalize, scale, and be p...
Views: 722

Videos

Audio-visual self-supervised baby learning (560 views, 14 days ago)
The Complexity of Fermions in Quantum Information and Beyond | Richard M. Karp Distinguished Lecture (1.4K views, 2 months ago)
Tensor Network Toy Models of Holographic Dualities: Locality and Causality (420 views, 2 months ago)
An Efficient Quantum Factoring Algorithm | Quantum Colloquium (1.8K views, 3 months ago)
Logical Quantum Processor Based On Reconfigurable Atom Arrays | Quantum Colloquium (2.8K views, 4 months ago)
Algebraic Codes/Algorithms II (978 views, 5 months ago)
Data Structures and Optimization for Fast Algorithms (1.1K views, 6 months ago)
Achieving Understandability and Privacy With Provenance-Based Explanations (372 views, 7 months ago)
Towards Interpretable Data Science (942 views, 7 months ago)
Algorithmic Aspects of Semiring Provenance for Stratified Datalog (296 views, 7 months ago)
Market Algorithms for Autobidding (861 views, 7 months ago)
Introducing RelationalAI (486 views, 7 months ago)
Lemur: Integrating Large Language Models in Automated Program Verification (833 views, 7 months ago)
Unique Challenges and Opportunities in Working with Residential Real Estate Data (148 views, 7 months ago)
Semiring Semantics (450 views, 7 months ago)
Memory-Regret Tradeoff for Online Learning (1.1K views, 8 months ago)
Nikhil Srivastava and Venkatesan Guruswami | Polylogues (1.2K views, 8 months ago)
Generating Approximate Ground States of Molecules Using Quantum Machine Learning (937 views, 10 months ago)
Quantum-Classical Cross-Correlations and the Post-selection Problem (760 views, 10 months ago)
Industry Applications of Hamiltonian Simulation and Beyond (1.1K views, 10 months ago)
What Can I Do With a Noisy Quantum Computer? (1.2K views, 10 months ago)
Quantum Machine Learning in the NISQ Era (1.1K views, 10 months ago)
Noncommutativity and Rounding Schemes for Combinatorial Optimization Parts 1 & 2 (912 views, 11 months ago)
Some Remarks About Quantum and Classical Local Hamiltonian Optimization and SDP Rounding (815 views, 11 months ago)
Multigroup Fairness | Polylogues (725 views, 1 year ago)
PCPs and Global Hyper-contractivity 1 (623 views, 1 year ago)
Constant-Round Arguments from One-Way Functions (615 views, 1 year ago)
Alvy Ray Smith | Polylogues (739 views, 1 year ago)
Efficient Quantum Gibbs Samplers | Quantum Colloquium (1.5K views, 1 year ago)

Comments

  • @naderbenammar7097 • 2 hours ago

    thank you 🙏🏻

  • @sm-pz8er • 3 days ago

    Very well explained. Thank you.

  • @calicoesblue4703 • 3 days ago

    Nice😎👍

  • @user-ic7ii8fs2j • 4 days ago

    People of different cultures view the world in entirely different ways. It depends on culture, language, genetics, etc. For example, people who speak Navajo have an entirely different way of perceiving reality and breaking it down into components than Western English-speaking people. A shaman would also see the world completely differently from a Western man.

  • @user-to9ub5xv7o • 5 days ago

    1. Introduction and Context (0:00 - 1:47)
       - Ilya Sutskever speaking at an event
       - Unable to discuss current technical work at OpenAI
       - Focused on AI alignment research recently
       - Will discuss old results from 2016 that influenced his thinking on unsupervised learning
    2. Fundamentals of Learning (1:47 - 5:51)
       - Questions why learning works at all mathematically
       - Discusses supervised learning theory (PAC learning, statistical learning theory)
       - Explains mathematical conditions for supervised learning success
       - Mentions the importance of training and test distributions being the same
    3. Unsupervised Learning Challenge (5:51 - 11:08)
       - Contrasts unsupervised learning with supervised learning
       - Questions why unsupervised learning works when optimizing one objective but caring about another
       - Discusses limitations of existing explanations for unsupervised learning
    4. Distribution Matching Approach (11:08 - 15:32)
       - Introduces distribution matching as a guaranteed unsupervised learning method
       - Explains how it can work for tasks like machine translation
       - Links to Sutskever's independent discovery of this approach in 2015
    5. Compression Theory of Unsupervised Learning (15:32 - 24:43)
       - Proposes compression as a framework for understanding unsupervised learning
       - Explains a thought experiment of jointly compressing two datasets
       - Introduces the concept of algorithmic mutual information
       - Links compression theory to prediction and machine learning algorithms
    6. Kolmogorov Complexity and Neural Networks (24:43 - 30:52)
       - Explains Kolmogorov complexity as the ultimate compressor
       - Draws parallels between Kolmogorov complexity and neural networks
       - Discusses conditional Kolmogorov complexity for unsupervised learning
       - Links the theory to practical neural network training
    7. Empirical Validation: iGPT (30:52 - 35:46)
       - Describes iGPT as an expensive proof of concept for the compression theory
       - Explains the application to the image domain using next-pixel prediction
       - Presents results showing improved unsupervised learning performance
    8. Linear Representations and Open Questions (35:46 - 38:27)
       - Discusses the mystery of why linear representations form in neural networks
       - Compares autoregressive models to BERT for linear representations
       - Speculates on reasons for differences in representation quality
    9. Q&A Session (38:27 - 54:37)
       - Addresses questions on various topics, including:
         - Comparison to other theories in cryptography
         - Limitations of the compression analogy
         - Relationship to energy-based models
         - Implications for supervised learning
         - Importance of autoregressive modeling
         - Relationship between model size and compression ability
         - Curriculum effects in neural network training
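The compression framing in item 5 of this outline (jointly compressing two datasets as a proxy for algorithmic mutual information) can be made concrete with a toy sketch. This is my own illustration, not code from the talk: it uses `zlib` as a crude, computable stand-in for an ideal Kolmogorov compressor, and the normalized compression distance (NCD) of Cilibrasi and Vitányi as the similarity measure.

```python
import random
import zlib

def compressed_len(data: bytes) -> int:
    """Length of zlib-compressed data: a crude, computable stand-in
    for the (uncomputable) Kolmogorov complexity K(x)."""
    return len(zlib.compress(data, level=9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance.

    A low NCD means jointly compressing x and y costs little more than
    compressing the larger one alone, i.e. the compressor found shared
    structure -- a practical proxy for algorithmic mutual information.
    """
    cx, cy, cxy = compressed_len(x), compressed_len(y), compressed_len(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

a = b"the quick brown fox jumps over the lazy dog " * 50
b_similar = b"the quick brown fox jumps over the lazy cat " * 50
rng = random.Random(0)  # deterministic "structureless" baseline
b_random = bytes(rng.randrange(256) for _ in range(2000))

# Shared structure compresses jointly; random bytes do not.
print(f"NCD(a, similar) = {ncd(a, b_similar):.2f}")  # noticeably smaller
print(f"NCD(a, random)  = {ncd(a, b_random):.2f}")   # close to 1
```

The point of the thought experiment in the talk is exactly this gap: a compressor that exploits regularities in one dataset to shorten its description of another has, in effect, learned something transferable about both.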

  • @Omeomeom • 6 days ago

    This was a fire talk, but the title could be more descriptive. First of all, it's misleading and made the talk sound so daunting that I put it off, but it was very informative.

  • @englishredneckintexas6604 • 6 days ago

    This was fantastic. I actually understand these concepts now.

  • @mostafatouny8411 • 8 days ago

    What a humble and nice person Luca is

  • @mcasariego • 8 days ago

    What a great introduction to tomography and quantum estimation!!

  • @DistortedV12 • 8 days ago

    why should this be so profound and how is it relevant to real world?

    • @angelxmod3 • 6 days ago

      The title explains it: "Platonic Representation". A platonic object exists outside of reality, and reality is just a reflection of the perfect form. Think of a chair: it has four legs and a flat surface; it takes physical form and gets certain details, but it is never a platonic chair. This hypothesis says that these models approach a platonic-form representation, which is evidence for the existence of platonic forms outside of our reality.

    • @user-ic7ii8fs2j • 4 days ago

      35:20 This hypothesis suggests that representations of the world are universal

  • @alidogramaci7468 • 9 days ago

    I am delighted to see such good work is being carried out at Columbia. One question I have as I am midway into your presentation: What you call the effective data set: is it unique? Can you build a confidence or credible set (region) around it?

  • @pensiveintrovert4318 • 9 days ago

    Maybe preparing first would help him sound like a lecturer instead of a high schooler spitting out random statements.

  • @T_SULTAN_ • 9 days ago

    Fantastic lecture!

  • @OlutayoTella • 10 days ago

    I can’t stop talking about the amazing potentials of BDAG, I’m sure of my great yield

  • @hyperduality2838 • 10 days ago

    Comparison, reflection, abstraction -- Immanuel Kant. Abstraction is the process of creating new concepts or ideas according to Immanuel Kant. Creating new concepts is a syntropic process -- teleological.
    Syntax is dual to semantics -- languages or communication, data. Large language models are using duality; if mathematics is a language then it is dual.
    Sense is dual to nonsense. Right is dual to wrong. "Only the Sith think in terms of absolutes" -- Obi Wan Kenobi. "Sith lords come in pairs" -- Obi Wan Kenobi.
    "Concepts are dual to percepts" -- the mind duality of Immanuel Kant. The intellectual mind/soul (concepts) is dual to the sensory mind/soul (percepts) -- the mind duality of Thomas Aquinas. Your mind/soul converts perceptions or measurements into conceptions or ideas; mathematicians create new concepts all the time from their observations, intuitions or perceptions. The mind/soul is actually dual.
    Mind (syntropy) is dual to matter (entropy) -- Descartes or Plato's divided line. Your mind converts entropy or average information into syntropy or mutual information -- information (data) is dual. Concepts or ideas are therefore syntropic in form or structure.
    Teleological physics (syntropy) is dual to non-teleological physics (entropy) -- physics is dual. Syntropy (prediction) is dual to increasing entropy -- the 4th law of thermodynamics! Duality creates reality! "Always two there are" -- Yoda.
    Physics is all about generalization or abstraction -- a syntropic process, teleological. Truth is dual to falsity -- propositional logic or Bayesian logic. Absolute truth is dual to relative truth -- Hume's fork. Truth is dual.

  • @GerardSans • 13 days ago

    The hypothesis of learning convergence is false. There's no universal representation of knowledge. There are, though, confirmation, data, and anthropomorphic biases to be learned here. See "umwelt" and the differing latent-space representations even for the exact same Transformers. That's enough to refute these claims.

    • @GerardSans • 13 days ago

      Besides, convergence is expected mathematically, as these models are massive approximation functions. It's only logical that they will converge. It's an approximation algorithm!

    • @GerardSans • 13 days ago

      It's strange, because this is so contradictory with day-to-day practice that it shows a massive gap between theoretical AI researchers and practitioners. Each model is trained from scratch, not fine-tuned from a common ancestor. This is not possible precisely because the latent-space representations are not compatible, even between different versions of the same model.

    • @jorgesimao4350 • 9 days ago

      The data is similar, so similar umwelt. Architecture and algorithm then converge to similar solutions, to "explain" the data.

  • @kellymoses8566 • 14 days ago

    It makes perfect sense for different AIs to learn similar representations of the same reality. This is similar to how science works.

  • @drdca8263 • 14 days ago

    So far I'm only 24 minutes into watching this, but this seems like a really important idea? In addition to the clear practical application, it seems to me like this should also imply something about, like, the theory of the development of scientific theories? If we replace the ML model with, e.g., Newton's law of universal gravitation (regarding "observations Isaac Newton knew about before publishing anything about universal gravitation" as the "training set" used to produce that theory, when we consider the requirement that the gold-standard data be independent of the training set... uhh... I guess this should give a... hm, maybe this isn't as applicable to this case as I thought. Still, seems very important!)

    Edit: I suppose the points at 33:00 - 35:15 should temper my, uh, somewhat wild imaginings for how widely this could be applied... and she also goes on to point out connections to previous literature dealing with somewhat similar things that I hadn't at all heard of; I guess one can tell that I haven't really studied statistics in much depth. Nonetheless, I continue to be of the opinion that this is *very* cool.

  • @ATH42069 • 15 days ago

    'we can talk offline about this' -the evolution of language in the information age

  • @mhmhmhmhmhmhmmhmh • 15 days ago

    Oh, come on now, man

  • @andreapollini8821 • 15 days ago

    Poor man

  • @ATH42069 • 15 days ago

    @11:56 when the mouth detection algorithm isn't sure if it is the mouth

  • @winsomehax • 16 days ago

    This was very shortly before the OpenAI crisis when he tried to get altman fired.

    • @RickySupriyadi • 3 days ago

      That, my friend, has something to do with a national security issue which had to be handled, so that in that period of time something could get secured. It's done and it succeeded; Sam going back isn't bad either, and OpenAI now has a military general for their cybersecurity. Well, um... this kind of issue will not get any simpler; it will only get more complicated, and might be out of reach for me solo...

  • @aaqib.s • 16 days ago

    It is so wonderful to listen to Prof. Andrew!

  • @mooncop • 16 days ago

    what is the opposite of confirmation bias?

  • @KakaSun0 • 16 days ago

    this could be sparks of SSI

  • @techteampxla2950 • 16 days ago

    Dear Prof Hayden, I found you a few years ago when I was following ProfLenSus work. I am a huge fan now and follow you avidly. Thank you for your work, and I assure you that some day I will see you become one of the greatest scientists of our time. Keep up the amazing contributions; we are appreciative!

  • @avimohan6594 • 17 days ago

    Rest in Peace, Luca.

  • @user-wr4yl7tx3w • 17 days ago

    How about tinnitus?

  • @justinkeane193 • 17 days ago

    I was eight minutes in before I understood the title!

  • @drdca8263 • 17 days ago

    Cool! Not that I know enough to have real reasons to have suspected one thing or another, but I was rather surprised that the pattern had the "local order without global order" thing, in contrast to when moving on a 2D surface. I would have guessed either "columns for horizontal positions" or a 3D global order.

    I guess the "putting hard balls into a region will tend to produce a local optimum for packing far from the global optimum" point, and the bit about the interaction potential producing similar patterns, maybe suggest something about how the cells come to fire when they do? Not the specifics, maybe, but something like, "there being cells that fire at location x causes other cells to be less likely to start firing at locations near x"?

    I think I'd like to read a bit about the relationship between the 2D random walks and the hexagonal grid. Did I hear correctly that that was connected to representation theory? Or maybe that was just saying representations more generally, and not representation theory.

    18:39: "they see!" :)

  • @Dela_bit • 20 days ago

    This presale is ready to go, having just released their most recent keynote.

  • @SapienSpace • 21 days ago

    @52:55 It would be nice to know what that "Attention" block is, such as whether K-means clustering is utilized there...
    @43:14 That "advantage function" is interesting; it would be great to have more details on it. It almost seems like two Fuzzy values.
    @34:05 That looks like a Euclidean distance (like K-means clustering).

  • @matanshtepel1230 • 21 days ago

    Great talk! Thanks Irit :)

  • @christains7000 • 22 days ago

    “You know…” No, Scott, I have no freaking idea. You have alien intelligence.

  • @honkhonk8009 • 22 days ago

    when tf did my feed go from neo nazi shit to this. I fw it though, didn't know what a Laplacian was until now. Always hated graph theory back in Uni cus I thought it was shallow. I wonder why we weren't taught this stuff?

  • @user-sx9lb1uv5m • 22 days ago

    Thank you for the lecture

  • @Defi_dalton • 23 days ago

    BlockDAG has raised $49.7million in the presale That's a huge progress from this project

  • @doolittlegeorge • 23 days ago

    *"set programming guide to stupid humans always the wholly totally and illegal War option every time all the time"* so who's the fucking idiot again? Sure isn't the hardly idiotic computer and computer people!

  • @fubiao9149 • 24 days ago

    how could one predict whether the system will reach some equilibrium state or frustrated state?

  • @yorailevi6747 • 27 days ago

    If I ever visit the USA I must attend one of these lectures. Thanks for uploading! I have been watching so many interesting lectures since I subscribed.

  • @AlMa-xi8wu • 1 month ago

    where lens?

  • @catalinamarquez6937 • 1 month ago

    Fauci go homes ❤

  • @engeliebrand6491 • 1 month ago

    My goodness (playful trumpet sound), this is fantastic! Thanks so much for sharing this publicly. Curious about what goes on in their minds!

  • @goodnessakinjo • 1 month ago

    BlockDAG is the next big thing

  • @Big_Bysen • 1 month ago

    BlockDAG has been for year so they are not just new to the game

  • @eitanporat9892 • 1 month ago

    clement is incredible :)

  • @axe863 • 1 month ago

    Awesome video

  • @DibuJayeola • 1 month ago

    BlockDAG's advancements in mining technology open up new possibilities for efficient and profitable cryptocurrency mining

  • @camerashysd7165 • 1 month ago

    Wow, guys, you could at least put up some subtitles or something 😮