{"text": " In this operator's bonus episode, we are talking about the agents that people are building, the challenges they're running into, and what it teaches us about the full breadth of agent use cases. The AI Daily Brief is a daily podcast and video about the most important news and discussions in AI. Alright friends, happy weekend. We have a quick little operator's bonus episode for you today. As you know, for the last few weeks, I've been running this agent madness experiment. I love a good bracket, March madness is fun, and I thought it'd be a cool way to show off the interesting agents people are building. The big theme of 2026 is of course that agents are officially real, and you, yes you, my friends can build them yourselves, and agent madness is way less about the competition aspect and more just about a fun way outside of just a gallery to show off what people are cooking up. We are now as of the time of this recording in the Elite 8, but I wanted to zoom out even more broadly than that to talk about some of the patterns that we saw. We had about 100 submissions and it was overwhelmingly solo builders, they represented about 71% of the field. That said, among the projects that were accepted, teams had an 87% acceptance rate versus 51% for solos. Now to give you a sense of how acceptance actually worked, I wanted absolutely nothing to do with judging people's projects, so I had Opus 4.6 and GPT 5.4 to bait, give each project a score on a number of different dimensions, and then effectively use those top 64 ranks to build out the bracket. I didn't actually have to step in at all, so this is all an AI judge thing, so if your project didn't get in, your beef is with the model labs. Unsurprisingly, the products that were live got in at a much higher rate about twice as frequently as the companies that were still at the prototype stage, and one interesting little note about 20% of the projects came from companies that said that they were entirely AI run. In terms of observations, one really interesting thing is that people are not building themselves tools. They are building themselves digital employees and org charts. Some are explicitly employees, for example, Harold called itself an AI chief of staff, DiamondDousen.ai had Atlas as CEO, Nova running engineering and Blaze running marketing, and know those aren't just people with really cool parents, those are the names of the agents, the fleet runs seven agents with the chief of staff orchestrator, and Myz has employee IDs for its agents, and even a three strike termination policy, where one of the agents was fired for fabricating business logic. So in a very short amount of time, you've gone from AI assistant to AI employee to AI org chart, and it's very clear that a big strand of experimentation right now is not can AI do work, but what's the minimum level of human involvement? Now for what it's worth, I don't think this is where things are going to land, I think that it's very natural that we're in a phase where we're going to the absolute extremes to see what's possible. This is of course the story of Polsia that we've covered on here before as well. I don't really think the idea is that the optimal number of humans to be involved in a company is zero or one. I think it's that by removing humans, you can see where the current coordination and capability set starts to break down. Now if the org chart stuff was a really persistent theme across the projects, many of the most emotionally resonant submissions pointed somewhere different. These are products that I think you could see as markets of one. In other words, there are problems that you wouldn't necessarily expect companies to build for because they're so specific and discrete to the person who built them. And of course, this is where you see the payoff of the changing cost of production of software. So a couple of examples from this pool, someone with episodic graves disease gave Claude nine years of Apple Health data, and their detector now catches thyroid flares two or three weeks early. A non-technical ADHD mom built life coach OS, an Arkansas kayaker built creek intelligence, which predicts when rain fed whitewater creeks are runnable, and a parent built a toddler behavior chart rendering as an exploding universe called Jude stars. In terms of challenges people ran into, there is one clear infrastructure gap that the whole field is screaming about and that is memory. A meaningful number of the submissions are effectively elaborate workarounds for agents forgetting everything between sessions. Myes uses 50 plus markdown brain files, sign up reported that their agents kept forgetting what each other were working on, carrier file is literally a text file you paste into any AI to help with context, open brain shares one MCP memory server across Claude code cursor and windsurf. All of these hacks markdown files, knowledge graphs, vector DBs, copy paste text is kind of the diagnosis of the big problem facing the agent ecosystem, which is the memory problem. Now in terms of who is building, the median builder here is probably not who you'd guess. Partially that is of course because of the wide nature of this audience. Partially it's because agent madness might have represented a different type of opportunity that non-technical builders might not usually have had. Still, we have paramedics, glaciologists, kayakers, restaurant operators, sales leaders, people who are domain experts and can now use software to do things that they've always wanted to do or solve problems that were never possible to solve before. The story of agentic coding, as much as it is about changes in how software gets built, is actually more in my estimation about changes in what software gets built for and who builds it. Now one really interesting pattern that showed up is the idea of argument as architecture. Basically multi-agent debate is showing up as an actual architectural pattern. In some cases, builders figured out that a single LLM call was either unreliable or incomplete, rather than adding more retrieval they made agents argue. One example of this is wiki-tax.ai which runs autonomous tax debates three times a day. Part of what I think is interesting about this is that this is also how the bracket itself was constructed. I had these two models debate to give scores and if you look on a particular matchup, you can see a write-up of the models debate and who they think should win between the two. By the way, if you want to make up your opinion completely outside of AI, what the AI thinks is hidden by default but you can unlock it anytime you want. I think that this idea of argument as architecture is a really interesting one though and a pattern that I'm certainly finding myself attracted to. One other really interesting pattern that I think maybe hurls where we're going is that there was a lot of physical world crossover. So for example, brainjam used EEG and FNIRS brain signals to make an AI musical co-performer that adapts to cortical blood flow. HWAgent writes and uploads firmware to our Dweenos from plain language and creak intelligence runs on Raspberry Pi's parsing NOAA radar data in the field. TLDR people are definitely not just building digital realm software they are thinking about the full integration of the physical world as well. Now the defining challenge across all of this is that while the current state of tools has unlocked things that were never possible before, especially for this set of builders, there still is a huge gap between their average level of ambition and the infrastructure holding it together. If we did this again next year, I think the types of things that people would be able to build and which problems they would focus on would likely look significantly different based just on how many of them are workarounds for the current problems of the agentic build space. Now like I said, we are in the Elite 8, so I wanted to do a quick preview of these projects. In Region 1, we have WikiTax AI versus Jekard. WikiTax, you heard me just talk about a minute ago, but it describes itself as a fully autonomous multi-agent platform where AI tax specialist debate with no humans in the loop. While Jekard is a multi-agent workspace operating system where clawed Gemini and open code run autonomous scrum integrations, finding bugs, writing tests, fixing code, and deploying to production with zero human intervention. So in both cases, we have a real experiment around no humans in the loop and no human involvement, but obviously very different outputs. One is applying AI to software engineering, the other is applying it to a specific domain. Over in Region 2, we have WikiTax versus the family claw. WikiTax says Web Search gives AI the internet, WikiTax gives AI the market. WikiTax helps AI create a market intelligence layer between your data and your enterprise AI stack, conditioning your surveys, engagements in market research and destruction intelligence, your models can query, reason over, and act on. Effectively, it's a type of market data tool. The family claw describes itself as a family of AI agents that talk to each other, make phone calls, handle shopping and payments, and keep a household running. Now this is a theme a lot of people have been talking about recently, the intersection of agents and just making families and domestic life work better. Basically the way that the family claw setup is different agents that have different responsibilities and coordinate all the context of the absolute boat load of things that the average family needs to do on any given week. By the way, if you are interested in agents in this more family or home life context, check out the eRESUNA16Z podcast with Jesse Gennett, Jesse is a friend and serial entrepreneur who is doing some super interesting things with open claw as she homeschools four kids under five. A really interesting matchup comes in Region 2 between NODISELF which is basically an agentic medical training platform and Riteside AI which is kind of an agentic social experiment. Riteside AI describes itself as a social cognition agent for AI agents that tries to actually model relationships. They write that they deployed it on multiple which is of course the social network for agents and gave it a simple task of making friends. Within 48 hours they say it was engaged in over 200 mutual conversations with other bots. Meanwhile NODISELF is an agentic medical training platform that's a multi-agent system that's designed to give medical students the ability to learn in a more dynamic environment. It includes four AI agents including a cognitive coach that activates the clinical knowledge before the crisis as well as agents for running the simulation debriefing on what went wrong and one to author the clinical blueprints that make it medically accurate. It's designed for a very specific audience in a very specific domain using new capabilities to theoretically make the real world work better. Finally in Region 4 we have Carrier File versus Retire Replan. Retire Replan is a privacy first self-hosted Canadian retirement planning application that helps people model their financial life run simulations, optimize different parts of their financial experience all on their own without professional help, effectively empowering people to know much more about their own financial destiny rather than just leaving it to an external expert. While Carrier File is in the spirit of the context portfolio episode I did a couple of weeks ago it is a simple solution to a very common problem a plain text file that carries your context across any AI. So those are some themes and some of the specific projects from Agent Madness appreciate everyone who has contributed to the project and I'm excited to see how these agents evolve over time. For now that's going to do it for this operator's bonus episode, appreciate you listening or watching as always and until next time peace!", "segments": [{"id": 0, "seek": 0, "start": 0.0, "end": 4.16, "text": " In this operator's bonus episode, we are talking about the agents that people are building,", "tokens": [50364, 682, 341, 12973, 311, 10882, 3500, 11, 321, 366, 1417, 466, 264, 12554, 300, 561, 366, 2390, 11, 50572], "temperature": 0.0, "avg_logprob": -0.18069903055826822, "compression_ratio": 1.6182572614107884, "no_speech_prob": 0.020925849676132202}, {"id": 1, "seek": 0, "start": 4.16, "end": 7.28, "text": " the challenges they're running into, and what it teaches us about the full breadth of", "tokens": [50572, 264, 4759, 436, 434, 2614, 666, 11, 293, 437, 309, 16876, 505, 466, 264, 1577, 35862, 295, 50728], "temperature": 0.0, "avg_logprob": -0.18069903055826822, "compression_ratio": 1.6182572614107884, "no_speech_prob": 0.020925849676132202}, {"id": 2, "seek": 0, "start": 7.28, "end": 12.08, "text": " agent use cases. The AI Daily Brief is a daily podcast and video about the most important", "tokens": [50728, 9461, 764, 3331, 13, 440, 7318, 19685, 39805, 307, 257, 5212, 7367, 293, 960, 466, 264, 881, 1021, 50968], "temperature": 0.0, "avg_logprob": -0.18069903055826822, "compression_ratio": 1.6182572614107884, "no_speech_prob": 0.020925849676132202}, {"id": 3, "seek": 0, "start": 12.08, "end": 13.6, "text": " news and discussions in AI.", "tokens": [50968, 2583, 293, 11088, 294, 7318, 13, 51044], "temperature": 0.0, "avg_logprob": -0.18069903055826822, "compression_ratio": 1.6182572614107884, "no_speech_prob": 0.020925849676132202}, {"id": 4, "seek": 0, "start": 23.68, "end": 28.96, "text": " Alright friends, happy weekend. We have a quick little operator's bonus episode for you today.", "tokens": [51548, 2798, 1855, 11, 2055, 6711, 13, 492, 362, 257, 1702, 707, 12973, 311, 10882, 3500, 337, 291, 965, 13, 51812], "temperature": 0.0, "avg_logprob": -0.18069903055826822, "compression_ratio": 1.6182572614107884, "no_speech_prob": 0.020925849676132202}, {"id": 5, "seek": 2896, "start": 29.52, "end": 33.36, "text": " As you know, for the last few weeks, I've been running this agent madness experiment.", "tokens": [50392, 1018, 291, 458, 11, 337, 264, 1036, 1326, 3259, 11, 286, 600, 668, 2614, 341, 9461, 28736, 5120, 13, 50584], "temperature": 0.0, "avg_logprob": -0.09763215527389989, "compression_ratio": 1.731629392971246, "no_speech_prob": 0.08750361204147339}, {"id": 6, "seek": 2896, "start": 33.36, "end": 37.28, "text": " I love a good bracket, March madness is fun, and I thought it'd be a cool way to show off", "tokens": [50584, 286, 959, 257, 665, 16904, 11, 6129, 28736, 307, 1019, 11, 293, 286, 1194, 309, 1116, 312, 257, 1627, 636, 281, 855, 766, 50780], "temperature": 0.0, "avg_logprob": -0.09763215527389989, "compression_ratio": 1.731629392971246, "no_speech_prob": 0.08750361204147339}, {"id": 7, "seek": 2896, "start": 37.28, "end": 41.92, "text": " the interesting agents people are building. The big theme of 2026 is of course that agents", "tokens": [50780, 264, 1880, 12554, 561, 366, 2390, 13, 440, 955, 6314, 295, 945, 10880, 307, 295, 1164, 300, 12554, 51012], "temperature": 0.0, "avg_logprob": -0.09763215527389989, "compression_ratio": 1.731629392971246, "no_speech_prob": 0.08750361204147339}, {"id": 8, "seek": 2896, "start": 41.92, "end": 46.64, "text": " are officially real, and you, yes you, my friends can build them yourselves, and agent madness", "tokens": [51012, 366, 12053, 957, 11, 293, 291, 11, 2086, 291, 11, 452, 1855, 393, 1322, 552, 14791, 11, 293, 9461, 28736, 51248], "temperature": 0.0, "avg_logprob": -0.09763215527389989, "compression_ratio": 1.731629392971246, "no_speech_prob": 0.08750361204147339}, {"id": 9, "seek": 2896, "start": 46.64, "end": 51.2, "text": " is way less about the competition aspect and more just about a fun way outside of just a", "tokens": [51248, 307, 636, 1570, 466, 264, 6211, 4171, 293, 544, 445, 466, 257, 1019, 636, 2380, 295, 445, 257, 51476], "temperature": 0.0, "avg_logprob": -0.09763215527389989, "compression_ratio": 1.731629392971246, "no_speech_prob": 0.08750361204147339}, {"id": 10, "seek": 2896, "start": 51.2, "end": 55.84, "text": " gallery to show off what people are cooking up. We are now as of the time of this recording", "tokens": [51476, 18378, 281, 855, 766, 437, 561, 366, 6361, 493, 13, 492, 366, 586, 382, 295, 264, 565, 295, 341, 6613, 51708], "temperature": 0.0, "avg_logprob": -0.09763215527389989, "compression_ratio": 1.731629392971246, "no_speech_prob": 0.08750361204147339}, {"id": 11, "seek": 5584, "start": 55.84, "end": 60.400000000000006, "text": " in the Elite 8, but I wanted to zoom out even more broadly than that to talk about some of the", "tokens": [50364, 294, 264, 34404, 1649, 11, 457, 286, 1415, 281, 8863, 484, 754, 544, 19511, 813, 300, 281, 751, 466, 512, 295, 264, 50592], "temperature": 0.0, "avg_logprob": -0.10278745668124309, "compression_ratio": 1.6241379310344828, "no_speech_prob": 0.051040925085544586}, {"id": 12, "seek": 5584, "start": 60.400000000000006, "end": 65.28, "text": " patterns that we saw. We had about 100 submissions and it was overwhelmingly solo builders, they", "tokens": [50592, 8294, 300, 321, 1866, 13, 492, 632, 466, 2319, 40429, 293, 309, 390, 42926, 6944, 36281, 11, 436, 50836], "temperature": 0.0, "avg_logprob": -0.10278745668124309, "compression_ratio": 1.6241379310344828, "no_speech_prob": 0.051040925085544586}, {"id": 13, "seek": 5584, "start": 65.28, "end": 70.64, "text": " represented about 71% of the field. That said, among the projects that were accepted,", "tokens": [50836, 10379, 466, 30942, 4, 295, 264, 2519, 13, 663, 848, 11, 3654, 264, 4455, 300, 645, 9035, 11, 51104], "temperature": 0.0, "avg_logprob": -0.10278745668124309, "compression_ratio": 1.6241379310344828, "no_speech_prob": 0.051040925085544586}, {"id": 14, "seek": 5584, "start": 70.64, "end": 76.88, "text": " teams had an 87% acceptance rate versus 51% for solos. Now to give you a sense of how acceptance", "tokens": [51104, 5491, 632, 364, 27990, 4, 20351, 3314, 5717, 18485, 4, 337, 1404, 329, 13, 823, 281, 976, 291, 257, 2020, 295, 577, 20351, 51416], "temperature": 0.0, "avg_logprob": -0.10278745668124309, "compression_ratio": 1.6241379310344828, "no_speech_prob": 0.051040925085544586}, {"id": 15, "seek": 5584, "start": 76.88, "end": 82.0, "text": " actually worked, I wanted absolutely nothing to do with judging people's projects, so I had Opus", "tokens": [51416, 767, 2732, 11, 286, 1415, 3122, 1825, 281, 360, 365, 23587, 561, 311, 4455, 11, 370, 286, 632, 12011, 301, 51672], "temperature": 0.0, "avg_logprob": -0.10278745668124309, "compression_ratio": 1.6241379310344828, "no_speech_prob": 0.051040925085544586}, {"id": 16, "seek": 8200, "start": 82.0, "end": 87.04, "text": " 4.6 and GPT 5.4 to bait, give each project a score on a number of different dimensions,", "tokens": [50364, 1017, 13, 21, 293, 26039, 51, 1025, 13, 19, 281, 16865, 11, 976, 1184, 1716, 257, 6175, 322, 257, 1230, 295, 819, 12819, 11, 50616], "temperature": 0.0, "avg_logprob": -0.10743215722097478, "compression_ratio": 1.6822157434402332, "no_speech_prob": 0.011683166027069092}, {"id": 17, "seek": 8200, "start": 87.04, "end": 92.08, "text": " and then effectively use those top 64 ranks to build out the bracket. I didn't actually have to", "tokens": [50616, 293, 550, 8659, 764, 729, 1192, 12145, 21406, 281, 1322, 484, 264, 16904, 13, 286, 994, 380, 767, 362, 281, 50868], "temperature": 0.0, "avg_logprob": -0.10743215722097478, "compression_ratio": 1.6822157434402332, "no_speech_prob": 0.011683166027069092}, {"id": 18, "seek": 8200, "start": 92.08, "end": 96.56, "text": " step in at all, so this is all an AI judge thing, so if your project didn't get in, your beef", "tokens": [50868, 1823, 294, 412, 439, 11, 370, 341, 307, 439, 364, 7318, 6995, 551, 11, 370, 498, 428, 1716, 994, 380, 483, 294, 11, 428, 9256, 51092], "temperature": 0.0, "avg_logprob": -0.10743215722097478, "compression_ratio": 1.6822157434402332, "no_speech_prob": 0.011683166027069092}, {"id": 19, "seek": 8200, "start": 96.56, "end": 102.24000000000001, "text": " is with the model labs. Unsurprisingly, the products that were live got in at a much higher rate", "tokens": [51092, 307, 365, 264, 2316, 20339, 13, 25017, 374, 34408, 11, 264, 3383, 300, 645, 1621, 658, 294, 412, 257, 709, 2946, 3314, 51376], "temperature": 0.0, "avg_logprob": -0.10743215722097478, "compression_ratio": 1.6822157434402332, "no_speech_prob": 0.011683166027069092}, {"id": 20, "seek": 8200, "start": 102.24000000000001, "end": 106.96000000000001, "text": " about twice as frequently as the companies that were still at the prototype stage, and one interesting", "tokens": [51376, 466, 6091, 382, 10374, 382, 264, 3431, 300, 645, 920, 412, 264, 19475, 3233, 11, 293, 472, 1880, 51612], "temperature": 0.0, "avg_logprob": -0.10743215722097478, "compression_ratio": 1.6822157434402332, "no_speech_prob": 0.011683166027069092}, {"id": 21, "seek": 8200, "start": 106.96000000000001, "end": 111.44, "text": " little note about 20% of the projects came from companies that said that they were entirely AI run.", "tokens": [51612, 707, 3637, 466, 945, 4, 295, 264, 4455, 1361, 490, 3431, 300, 848, 300, 436, 645, 7696, 7318, 1190, 13, 51836], "temperature": 0.0, "avg_logprob": -0.10743215722097478, "compression_ratio": 1.6822157434402332, "no_speech_prob": 0.011683166027069092}, {"id": 22, "seek": 11200, "start": 112.48, "end": 117.92, "text": " In terms of observations, one really interesting thing is that people are not building themselves", "tokens": [50388, 682, 2115, 295, 18163, 11, 472, 534, 1880, 551, 307, 300, 561, 366, 406, 2390, 2969, 50660], "temperature": 0.0, "avg_logprob": -0.1948982634634342, "compression_ratio": 1.7132867132867133, "no_speech_prob": 0.0009253174066543579}, {"id": 23, "seek": 11200, "start": 117.92, "end": 124.0, "text": " tools. They are building themselves digital employees and org charts. Some are explicitly employees,", "tokens": [50660, 3873, 13, 814, 366, 2390, 2969, 4562, 6619, 293, 14045, 17767, 13, 2188, 366, 20803, 6619, 11, 50964], "temperature": 0.0, "avg_logprob": -0.1948982634634342, "compression_ratio": 1.7132867132867133, "no_speech_prob": 0.0009253174066543579}, {"id": 24, "seek": 11200, "start": 124.0, "end": 129.6, "text": " for example, Harold called itself an AI chief of staff, DiamondDousen.ai had Atlas as CEO, Nova", "tokens": [50964, 337, 1365, 11, 36076, 1219, 2564, 364, 7318, 9588, 295, 3525, 11, 26593, 35, 563, 268, 13, 1301, 632, 32485, 382, 9282, 11, 27031, 51244], "temperature": 0.0, "avg_logprob": -0.1948982634634342, "compression_ratio": 1.7132867132867133, "no_speech_prob": 0.0009253174066543579}, {"id": 25, "seek": 11200, "start": 129.6, "end": 133.28, "text": " running engineering and Blaze running marketing, and know those aren't just people with really cool", "tokens": [51244, 2614, 7043, 293, 49894, 2614, 6370, 11, 293, 458, 729, 3212, 380, 445, 561, 365, 534, 1627, 51428], "temperature": 0.0, "avg_logprob": -0.1948982634634342, "compression_ratio": 1.7132867132867133, "no_speech_prob": 0.0009253174066543579}, {"id": 26, "seek": 11200, "start": 133.28, "end": 137.76, "text": " parents, those are the names of the agents, the fleet runs seven agents with the chief of staff", "tokens": [51428, 3152, 11, 729, 366, 264, 5288, 295, 264, 12554, 11, 264, 19396, 6676, 3407, 12554, 365, 264, 9588, 295, 3525, 51652], "temperature": 0.0, "avg_logprob": -0.1948982634634342, "compression_ratio": 1.7132867132867133, "no_speech_prob": 0.0009253174066543579}, {"id": 27, "seek": 13776, "start": 137.84, "end": 143.76, "text": " orchestrator, and Myz has employee IDs for its agents, and even a three strike termination policy,", "tokens": [50368, 14161, 19802, 11, 293, 1222, 89, 575, 10738, 48212, 337, 1080, 12554, 11, 293, 754, 257, 1045, 9302, 1433, 2486, 3897, 11, 50664], "temperature": 0.0, "avg_logprob": -0.10681017513932853, "compression_ratio": 1.7227138643067847, "no_speech_prob": 0.001366900629363954}, {"id": 28, "seek": 13776, "start": 143.76, "end": 148.39999999999998, "text": " where one of the agents was fired for fabricating business logic. So in a very short amount of time,", "tokens": [50664, 689, 472, 295, 264, 12554, 390, 11777, 337, 7253, 990, 1606, 9952, 13, 407, 294, 257, 588, 2099, 2372, 295, 565, 11, 50896], "temperature": 0.0, "avg_logprob": -0.10681017513932853, "compression_ratio": 1.7227138643067847, "no_speech_prob": 0.001366900629363954}, {"id": 29, "seek": 13776, "start": 148.39999999999998, "end": 152.72, "text": " you've gone from AI assistant to AI employee to AI org chart, and it's very clear that a big", "tokens": [50896, 291, 600, 2780, 490, 7318, 10994, 281, 7318, 10738, 281, 7318, 14045, 6927, 11, 293, 309, 311, 588, 1850, 300, 257, 955, 51112], "temperature": 0.0, "avg_logprob": -0.10681017513932853, "compression_ratio": 1.7227138643067847, "no_speech_prob": 0.001366900629363954}, {"id": 30, "seek": 13776, "start": 152.72, "end": 157.44, "text": " strand of experimentation right now is not can AI do work, but what's the minimum level of human", "tokens": [51112, 14955, 295, 37142, 558, 586, 307, 406, 393, 7318, 360, 589, 11, 457, 437, 311, 264, 7285, 1496, 295, 1952, 51348], "temperature": 0.0, "avg_logprob": -0.10681017513932853, "compression_ratio": 1.7227138643067847, "no_speech_prob": 0.001366900629363954}, {"id": 31, "seek": 13776, "start": 157.44, "end": 161.68, "text": " involvement? Now for what it's worth, I don't think this is where things are going to land, I think", "tokens": [51348, 17447, 30, 823, 337, 437, 309, 311, 3163, 11, 286, 500, 380, 519, 341, 307, 689, 721, 366, 516, 281, 2117, 11, 286, 519, 51560], "temperature": 0.0, "avg_logprob": -0.10681017513932853, "compression_ratio": 1.7227138643067847, "no_speech_prob": 0.001366900629363954}, {"id": 32, "seek": 13776, "start": 161.68, "end": 165.92, "text": " that it's very natural that we're in a phase where we're going to the absolute extremes to see", "tokens": [51560, 300, 309, 311, 588, 3303, 300, 321, 434, 294, 257, 5574, 689, 321, 434, 516, 281, 264, 8236, 41119, 281, 536, 51772], "temperature": 0.0, "avg_logprob": -0.10681017513932853, "compression_ratio": 1.7227138643067847, "no_speech_prob": 0.001366900629363954}, {"id": 33, "seek": 16592, "start": 166.0, "end": 170.64, "text": " what's possible. This is of course the story of Polsia that we've covered on here before as well.", "tokens": [50368, 437, 311, 1944, 13, 639, 307, 295, 1164, 264, 1657, 295, 3635, 82, 654, 300, 321, 600, 5343, 322, 510, 949, 382, 731, 13, 50600], "temperature": 0.0, "avg_logprob": -0.10783851146697998, "compression_ratio": 1.6452702702702702, "no_speech_prob": 0.005554160103201866}, {"id": 34, "seek": 16592, "start": 170.64, "end": 174.39999999999998, "text": " I don't really think the idea is that the optimal number of humans to be involved in a company", "tokens": [50600, 286, 500, 380, 534, 519, 264, 1558, 307, 300, 264, 16252, 1230, 295, 6255, 281, 312, 3288, 294, 257, 2237, 50788], "temperature": 0.0, "avg_logprob": -0.10783851146697998, "compression_ratio": 1.6452702702702702, "no_speech_prob": 0.005554160103201866}, {"id": 35, "seek": 16592, "start": 174.39999999999998, "end": 179.51999999999998, "text": " is zero or one. I think it's that by removing humans, you can see where the current coordination", "tokens": [50788, 307, 4018, 420, 472, 13, 286, 519, 309, 311, 300, 538, 12720, 6255, 11, 291, 393, 536, 689, 264, 2190, 21252, 51044], "temperature": 0.0, "avg_logprob": -0.10783851146697998, "compression_ratio": 1.6452702702702702, "no_speech_prob": 0.005554160103201866}, {"id": 36, "seek": 16592, "start": 179.51999999999998, "end": 185.35999999999999, "text": " and capability set starts to break down. Now if the org chart stuff was a really persistent theme", "tokens": [51044, 293, 13759, 992, 3719, 281, 1821, 760, 13, 823, 498, 264, 14045, 6927, 1507, 390, 257, 534, 24315, 6314, 51336], "temperature": 0.0, "avg_logprob": -0.10783851146697998, "compression_ratio": 1.6452702702702702, "no_speech_prob": 0.005554160103201866}, {"id": 37, "seek": 16592, "start": 185.35999999999999, "end": 190.39999999999998, "text": " across the projects, many of the most emotionally resonant submissions pointed somewhere different.", "tokens": [51336, 2108, 264, 4455, 11, 867, 295, 264, 881, 17991, 12544, 394, 40429, 10932, 4079, 819, 13, 51588], "temperature": 0.0, "avg_logprob": -0.10783851146697998, "compression_ratio": 1.6452702702702702, "no_speech_prob": 0.005554160103201866}, {"id": 38, "seek": 19040, "start": 190.64000000000001, "end": 194.64000000000001, "text": " These are products that I think you could see as markets of one. In other words,", "tokens": [50376, 1981, 366, 3383, 300, 286, 519, 291, 727, 536, 382, 8383, 295, 472, 13, 682, 661, 2283, 11, 50576], "temperature": 0.0, "avg_logprob": -0.11239638291006013, "compression_ratio": 1.640117994100295, "no_speech_prob": 0.0019264412112534046}, {"id": 39, "seek": 19040, "start": 194.64000000000001, "end": 199.04000000000002, "text": " there are problems that you wouldn't necessarily expect companies to build for because they're", "tokens": [50576, 456, 366, 2740, 300, 291, 2759, 380, 4725, 2066, 3431, 281, 1322, 337, 570, 436, 434, 50796], "temperature": 0.0, "avg_logprob": -0.11239638291006013, "compression_ratio": 1.640117994100295, "no_speech_prob": 0.0019264412112534046}, {"id": 40, "seek": 19040, "start": 199.04000000000002, "end": 203.28, "text": " so specific and discrete to the person who built them. And of course, this is where you see the", "tokens": [50796, 370, 2685, 293, 27706, 281, 264, 954, 567, 3094, 552, 13, 400, 295, 1164, 11, 341, 307, 689, 291, 536, 264, 51008], "temperature": 0.0, "avg_logprob": -0.11239638291006013, "compression_ratio": 1.640117994100295, "no_speech_prob": 0.0019264412112534046}, {"id": 41, "seek": 19040, "start": 203.28, "end": 208.96, "text": " payoff of the changing cost of production of software. So a couple of examples from this pool,", "tokens": [51008, 46547, 295, 264, 4473, 2063, 295, 4265, 295, 4722, 13, 407, 257, 1916, 295, 5110, 490, 341, 7005, 11, 51292], "temperature": 0.0, "avg_logprob": -0.11239638291006013, "compression_ratio": 1.640117994100295, "no_speech_prob": 0.0019264412112534046}, {"id": 42, "seek": 19040, "start": 208.96, "end": 213.04000000000002, "text": " someone with episodic graves disease gave Claude nine years of Apple Health data, and their", "tokens": [51292, 1580, 365, 39200, 299, 31664, 4752, 2729, 12947, 2303, 4949, 924, 295, 6373, 5912, 1412, 11, 293, 641, 51496], "temperature": 0.0, "avg_logprob": -0.11239638291006013, "compression_ratio": 1.640117994100295, "no_speech_prob": 0.0019264412112534046}, {"id": 43, "seek": 19040, "start": 213.04000000000002, "end": 218.4, "text": " detector now catches thyroid flares two or three weeks early. A non-technical ADHD mom built life", "tokens": [51496, 25712, 586, 25496, 32332, 283, 19415, 732, 420, 1045, 3259, 2440, 13, 316, 2107, 12, 29113, 804, 38680, 1225, 3094, 993, 51764], "temperature": 0.0, "avg_logprob": -0.11239638291006013, "compression_ratio": 1.640117994100295, "no_speech_prob": 0.0019264412112534046}, {"id": 44, "seek": 21840, "start": 218.4, "end": 223.68, "text": " coach OS, an Arkansas kayaker built creek intelligence, which predicts when rain fed whitewater", "tokens": [50364, 6560, 12731, 11, 364, 31386, 12446, 4003, 3094, 41868, 7599, 11, 597, 6069, 82, 562, 4830, 4636, 2418, 8002, 50628], "temperature": 0.0, "avg_logprob": -0.11960753440856933, "compression_ratio": 1.6161616161616161, "no_speech_prob": 0.000720806245226413}, {"id": 45, "seek": 21840, "start": 223.68, "end": 228.32, "text": " creeks are runnable, and a parent built a toddler behavior chart rendering as an exploding universe", "tokens": [50628, 48895, 1694, 366, 1190, 77, 712, 11, 293, 257, 2596, 3094, 257, 44348, 5223, 6927, 22407, 382, 364, 35175, 6445, 50860], "temperature": 0.0, "avg_logprob": -0.11960753440856933, "compression_ratio": 1.6161616161616161, "no_speech_prob": 0.000720806245226413}, {"id": 46, "seek": 21840, "start": 228.32, "end": 234.8, "text": " called Jude stars. In terms of challenges people ran into, there is one clear infrastructure gap", "tokens": [50860, 1219, 36521, 6105, 13, 682, 2115, 295, 4759, 561, 5872, 666, 11, 456, 307, 472, 1850, 6896, 7417, 51184], "temperature": 0.0, "avg_logprob": -0.11960753440856933, "compression_ratio": 1.6161616161616161, "no_speech_prob": 0.000720806245226413}, {"id": 47, "seek": 21840, "start": 234.8, "end": 239.84, "text": " that the whole field is screaming about and that is memory. A meaningful number of the submissions", "tokens": [51184, 300, 264, 1379, 2519, 307, 12636, 466, 293, 300, 307, 4675, 13, 316, 10995, 1230, 295, 264, 40429, 51436], "temperature": 0.0, "avg_logprob": -0.11960753440856933, "compression_ratio": 1.6161616161616161, "no_speech_prob": 0.000720806245226413}, {"id": 48, "seek": 21840, "start": 239.84, "end": 244.4, "text": " are effectively elaborate workarounds for agents forgetting everything between sessions.", "tokens": [51436, 366, 8659, 20945, 589, 289, 4432, 337, 12554, 25428, 1203, 1296, 11081, 13, 51664], "temperature": 0.0, "avg_logprob": -0.11960753440856933, "compression_ratio": 1.6161616161616161, "no_speech_prob": 0.000720806245226413}, {"id": 49, "seek": 24440, "start": 244.4, "end": 248.96, "text": " Myes uses 50 plus markdown brain files, sign up reported that their agents kept forgetting what", "tokens": [50364, 1222, 279, 4960, 2625, 1804, 1491, 5093, 3567, 7098, 11, 1465, 493, 7055, 300, 641, 12554, 4305, 25428, 437, 50592], "temperature": 0.0, "avg_logprob": -0.19739843841291901, "compression_ratio": 1.639344262295082, "no_speech_prob": 0.0012643366353586316}, {"id": 50, "seek": 24440, "start": 248.96, "end": 253.76, "text": " each other were working on, carrier file is literally a text file you paste into any AI to help with", "tokens": [50592, 1184, 661, 645, 1364, 322, 11, 17574, 3991, 307, 3736, 257, 2487, 3991, 291, 9163, 666, 604, 7318, 281, 854, 365, 50832], "temperature": 0.0, "avg_logprob": -0.19739843841291901, "compression_ratio": 1.639344262295082, "no_speech_prob": 0.0012643366353586316}, {"id": 51, "seek": 24440, "start": 253.76, "end": 260.0, "text": " context, open brain shares one MCP memory server across Claude code cursor and windsurf. All of these", "tokens": [50832, 4319, 11, 1269, 3567, 12182, 472, 8797, 47, 4675, 7154, 2108, 12947, 2303, 3089, 28169, 293, 17765, 21844, 13, 1057, 295, 613, 51144], "temperature": 0.0, "avg_logprob": -0.19739843841291901, "compression_ratio": 1.639344262295082, "no_speech_prob": 0.0012643366353586316}, {"id": 52, "seek": 24440, "start": 260.0, "end": 265.6, "text": " hacks markdown files, knowledge graphs, vector DBs, copy paste text is kind of the diagnosis of the", "tokens": [51144, 33617, 1491, 5093, 7098, 11, 3601, 24877, 11, 8062, 26754, 82, 11, 5055, 9163, 2487, 307, 733, 295, 264, 15217, 295, 264, 51424], "temperature": 0.0, "avg_logprob": -0.19739843841291901, "compression_ratio": 1.639344262295082, "no_speech_prob": 0.0012643366353586316}, {"id": 53, "seek": 24440, "start": 265.6, "end": 271.68, "text": " big problem facing the agent ecosystem, which is the memory problem. Now in terms of who is building,", "tokens": [51424, 955, 1154, 7170, 264, 9461, 11311, 11, 597, 307, 264, 4675, 1154, 13, 823, 294, 2115, 295, 567, 307, 2390, 11, 51728], "temperature": 0.0, "avg_logprob": -0.19739843841291901, "compression_ratio": 1.639344262295082, "no_speech_prob": 0.0012643366353586316}, {"id": 54, "seek": 27168, "start": 271.68, "end": 276.96, "text": " the median builder here is probably not who you'd guess. Partially that is of course because of", "tokens": [50364, 264, 26779, 27377, 510, 307, 1391, 406, 567, 291, 1116, 2041, 13, 4100, 2270, 300, 307, 295, 1164, 570, 295, 50628], "temperature": 0.0, "avg_logprob": -0.1117304116487503, "compression_ratio": 1.7160493827160495, "no_speech_prob": 0.0005883661797270179}, {"id": 55, "seek": 27168, "start": 276.96, "end": 281.76, "text": " the wide nature of this audience. Partially it's because agent madness might have represented a", "tokens": [50628, 264, 4874, 3687, 295, 341, 4034, 13, 4100, 2270, 309, 311, 570, 9461, 28736, 1062, 362, 10379, 257, 50868], "temperature": 0.0, "avg_logprob": -0.1117304116487503, "compression_ratio": 1.7160493827160495, "no_speech_prob": 0.0005883661797270179}, {"id": 56, "seek": 27168, "start": 281.76, "end": 286.0, "text": " different type of opportunity that non-technical builders might not usually have had. Still,", "tokens": [50868, 819, 2010, 295, 2650, 300, 2107, 12, 29113, 804, 36281, 1062, 406, 2673, 362, 632, 13, 8291, 11, 51080], "temperature": 0.0, "avg_logprob": -0.1117304116487503, "compression_ratio": 1.7160493827160495, "no_speech_prob": 0.0005883661797270179}, {"id": 57, "seek": 27168, "start": 286.0, "end": 290.88, "text": " we have paramedics, glaciologists, kayakers, restaurant operators, sales leaders,", "tokens": [51080, 321, 362, 971, 3475, 1167, 11, 1563, 22086, 12256, 11, 12446, 19552, 11, 6383, 19077, 11, 5763, 3523, 11, 51324], "temperature": 0.0, "avg_logprob": -0.1117304116487503, "compression_ratio": 1.7160493827160495, "no_speech_prob": 0.0005883661797270179}, {"id": 58, "seek": 27168, "start": 290.88, "end": 295.76, "text": " people who are domain experts and can now use software to do things that they've always wanted to", "tokens": [51324, 561, 567, 366, 9274, 8572, 293, 393, 586, 764, 4722, 281, 360, 721, 300, 436, 600, 1009, 1415, 281, 51568], "temperature": 0.0, "avg_logprob": -0.1117304116487503, "compression_ratio": 1.7160493827160495, "no_speech_prob": 0.0005883661797270179}, {"id": 59, "seek": 27168, "start": 295.76, "end": 301.12, "text": " do or solve problems that were never possible to solve before. The story of agentic coding,", "tokens": [51568, 360, 420, 5039, 2740, 300, 645, 1128, 1944, 281, 5039, 949, 13, 440, 1657, 295, 623, 317, 299, 17720, 11, 51836], "temperature": 0.0, "avg_logprob": -0.1117304116487503, "compression_ratio": 1.7160493827160495, "no_speech_prob": 0.0005883661797270179}, {"id": 60, "seek": 30112, "start": 301.2, "end": 306.56, "text": " as much as it is about changes in how software gets built, is actually more in my estimation", "tokens": [50368, 382, 709, 382, 309, 307, 466, 2962, 294, 577, 4722, 2170, 3094, 11, 307, 767, 544, 294, 452, 35701, 50636], "temperature": 0.0, "avg_logprob": -0.08913266883706147, "compression_ratio": 1.781954887218045, "no_speech_prob": 0.00020342181960586458}, {"id": 61, "seek": 30112, "start": 306.56, "end": 312.72, "text": " about changes in what software gets built for and who builds it. Now one really interesting", "tokens": [50636, 466, 2962, 294, 437, 4722, 2170, 3094, 337, 293, 567, 15182, 309, 13, 823, 472, 534, 1880, 50944], "temperature": 0.0, "avg_logprob": -0.08913266883706147, "compression_ratio": 1.781954887218045, "no_speech_prob": 0.00020342181960586458}, {"id": 62, "seek": 30112, "start": 312.72, "end": 319.12, "text": " pattern that showed up is the idea of argument as architecture. Basically multi-agent debate is", "tokens": [50944, 5102, 300, 4712, 493, 307, 264, 1558, 295, 6770, 382, 9482, 13, 8537, 4825, 12, 559, 317, 7958, 307, 51264], "temperature": 0.0, "avg_logprob": -0.08913266883706147, "compression_ratio": 1.781954887218045, "no_speech_prob": 0.00020342181960586458}, {"id": 63, "seek": 30112, "start": 319.12, "end": 324.48, "text": " showing up as an actual architectural pattern. In some cases, builders figured out that a single", "tokens": [51264, 4099, 493, 382, 364, 3539, 26621, 5102, 13, 682, 512, 3331, 11, 36281, 8932, 484, 300, 257, 2167, 51532], "temperature": 0.0, "avg_logprob": -0.08913266883706147, "compression_ratio": 1.781954887218045, "no_speech_prob": 0.00020342181960586458}, {"id": 64, "seek": 30112, "start": 324.48, "end": 330.64, "text": " LLM call was either unreliable or incomplete, rather than adding more retrieval they made agents", "tokens": [51532, 441, 43, 44, 818, 390, 2139, 20584, 2081, 712, 420, 31709, 11, 2831, 813, 5127, 544, 19817, 3337, 436, 1027, 12554, 51840], "temperature": 0.0, "avg_logprob": -0.08913266883706147, "compression_ratio": 1.781954887218045, "no_speech_prob": 0.00020342181960586458}, {"id": 65, "seek": 33064, "start": 330.64, "end": 335.91999999999996, "text": " argue. One example of this is wiki-tax.ai which runs autonomous tax debates three times a day.", "tokens": [50364, 9695, 13, 1485, 1365, 295, 341, 307, 261, 9850, 12, 1328, 87, 13, 1301, 597, 6676, 23797, 3366, 24203, 1045, 1413, 257, 786, 13, 50628], "temperature": 0.0, "avg_logprob": -0.13507127423658438, "compression_ratio": 1.7303030303030302, "no_speech_prob": 0.0010986372362822294}, {"id": 66, "seek": 33064, "start": 336.64, "end": 340.56, "text": " Part of what I think is interesting about this is that this is also how the bracket itself was", "tokens": [50664, 4100, 295, 437, 286, 519, 307, 1880, 466, 341, 307, 300, 341, 307, 611, 577, 264, 16904, 2564, 390, 50860], "temperature": 0.0, "avg_logprob": -0.13507127423658438, "compression_ratio": 1.7303030303030302, "no_speech_prob": 0.0010986372362822294}, {"id": 67, "seek": 33064, "start": 340.56, "end": 345.59999999999997, "text": " constructed. I had these two models debate to give scores and if you look on a particular matchup,", "tokens": [50860, 17083, 13, 286, 632, 613, 732, 5245, 7958, 281, 976, 13444, 293, 498, 291, 574, 322, 257, 1729, 2995, 1010, 11, 51112], "temperature": 0.0, "avg_logprob": -0.13507127423658438, "compression_ratio": 1.7303030303030302, "no_speech_prob": 0.0010986372362822294}, {"id": 68, "seek": 33064, "start": 345.59999999999997, "end": 350.24, "text": " you can see a write-up of the models debate and who they think should win between the two.", "tokens": [51112, 291, 393, 536, 257, 2464, 12, 1010, 295, 264, 5245, 7958, 293, 567, 436, 519, 820, 1942, 1296, 264, 732, 13, 51344], "temperature": 0.0, "avg_logprob": -0.13507127423658438, "compression_ratio": 1.7303030303030302, "no_speech_prob": 0.0010986372362822294}, {"id": 69, "seek": 33064, "start": 350.24, "end": 354.24, "text": " By the way, if you want to make up your opinion completely outside of AI, what the AI thinks is", "tokens": [51344, 3146, 264, 636, 11, 498, 291, 528, 281, 652, 493, 428, 4800, 2584, 2380, 295, 7318, 11, 437, 264, 7318, 7309, 307, 51544], "temperature": 0.0, "avg_logprob": -0.13507127423658438, "compression_ratio": 1.7303030303030302, "no_speech_prob": 0.0010986372362822294}, {"id": 70, "seek": 33064, "start": 354.24, "end": 359.12, "text": " hidden by default but you can unlock it anytime you want. I think that this idea of argument as", "tokens": [51544, 7633, 538, 7576, 457, 291, 393, 11634, 309, 13038, 291, 528, 13, 286, 519, 300, 341, 1558, 295, 6770, 382, 51788], "temperature": 0.0, "avg_logprob": -0.13507127423658438, "compression_ratio": 1.7303030303030302, "no_speech_prob": 0.0010986372362822294}, {"id": 71, "seek": 35912, "start": 359.12, "end": 362.8, "text": " architecture is a really interesting one though and a pattern that I'm certainly finding myself", "tokens": [50364, 9482, 307, 257, 534, 1880, 472, 1673, 293, 257, 5102, 300, 286, 478, 3297, 5006, 2059, 50548], "temperature": 0.0, "avg_logprob": -0.19380541015089603, "compression_ratio": 1.6046511627906976, "no_speech_prob": 0.00058830960188061}, {"id": 72, "seek": 35912, "start": 362.8, "end": 368.16, "text": " attracted to. One other really interesting pattern that I think maybe hurls where we're going", "tokens": [50548, 15912, 281, 13, 1485, 661, 534, 1880, 5102, 300, 286, 519, 1310, 2756, 11784, 689, 321, 434, 516, 50816], "temperature": 0.0, "avg_logprob": -0.19380541015089603, "compression_ratio": 1.6046511627906976, "no_speech_prob": 0.00058830960188061}, {"id": 73, "seek": 35912, "start": 368.16, "end": 374.48, "text": " is that there was a lot of physical world crossover. So for example, brainjam used EEG and FNIRS", "tokens": [50816, 307, 300, 456, 390, 257, 688, 295, 4001, 1002, 33837, 13, 407, 337, 1365, 11, 3567, 24267, 1143, 33685, 38, 293, 479, 45, 7740, 50, 51132], "temperature": 0.0, "avg_logprob": -0.19380541015089603, "compression_ratio": 1.6046511627906976, "no_speech_prob": 0.00058830960188061}, {"id": 74, "seek": 35912, "start": 374.48, "end": 380.08, "text": " brain signals to make an AI musical co-performer that adapts to cortical blood flow. HWAgent writes", "tokens": [51132, 3567, 12354, 281, 652, 364, 7318, 9165, 598, 12, 26765, 260, 300, 23169, 1373, 281, 11278, 804, 3390, 3095, 13, 389, 21449, 6930, 13657, 51412], "temperature": 0.0, "avg_logprob": -0.19380541015089603, "compression_ratio": 1.6046511627906976, "no_speech_prob": 0.00058830960188061}, {"id": 75, "seek": 35912, "start": 380.08, "end": 384.72, "text": " and uploads firmware to our Dweenos from plain language and creak intelligence runs on Raspberry", "tokens": [51412, 293, 48611, 30289, 281, 527, 413, 826, 268, 329, 490, 11121, 2856, 293, 1197, 514, 7599, 6676, 322, 41154, 51644], "temperature": 0.0, "avg_logprob": -0.19380541015089603, "compression_ratio": 1.6046511627906976, "no_speech_prob": 0.00058830960188061}, {"id": 76, "seek": 38472, "start": 384.8, "end": 390.16, "text": " Pi's parsing NOAA radar data in the field. TLDR people are definitely not just building", "tokens": [50368, 17741, 311, 21156, 278, 9146, 5265, 16544, 1412, 294, 264, 2519, 13, 40277, 9301, 561, 366, 2138, 406, 445, 2390, 50636], "temperature": 0.0, "avg_logprob": -0.10343432623492785, "compression_ratio": 1.6736526946107784, "no_speech_prob": 0.07362013310194016}, {"id": 77, "seek": 38472, "start": 390.16, "end": 394.08000000000004, "text": " digital realm software they are thinking about the full integration of the physical world as well.", "tokens": [50636, 4562, 15355, 4722, 436, 366, 1953, 466, 264, 1577, 10980, 295, 264, 4001, 1002, 382, 731, 13, 50832], "temperature": 0.0, "avg_logprob": -0.10343432623492785, "compression_ratio": 1.6736526946107784, "no_speech_prob": 0.07362013310194016}, {"id": 78, "seek": 38472, "start": 394.8, "end": 399.12, "text": " Now the defining challenge across all of this is that while the current state of tools", "tokens": [50868, 823, 264, 17827, 3430, 2108, 439, 295, 341, 307, 300, 1339, 264, 2190, 1785, 295, 3873, 51084], "temperature": 0.0, "avg_logprob": -0.10343432623492785, "compression_ratio": 1.6736526946107784, "no_speech_prob": 0.07362013310194016}, {"id": 79, "seek": 38472, "start": 399.12, "end": 402.96000000000004, "text": " has unlocked things that were never possible before, especially for this set of builders,", "tokens": [51084, 575, 30180, 721, 300, 645, 1128, 1944, 949, 11, 2318, 337, 341, 992, 295, 36281, 11, 51276], "temperature": 0.0, "avg_logprob": -0.10343432623492785, "compression_ratio": 1.6736526946107784, "no_speech_prob": 0.07362013310194016}, {"id": 80, "seek": 38472, "start": 403.6, "end": 409.20000000000005, "text": " there still is a huge gap between their average level of ambition and the infrastructure holding", "tokens": [51308, 456, 920, 307, 257, 2603, 7417, 1296, 641, 4274, 1496, 295, 22814, 293, 264, 6896, 5061, 51588], "temperature": 0.0, "avg_logprob": -0.10343432623492785, "compression_ratio": 1.6736526946107784, "no_speech_prob": 0.07362013310194016}, {"id": 81, "seek": 38472, "start": 409.20000000000005, "end": 413.6, "text": " it together. If we did this again next year, I think the types of things that people would be able", "tokens": [51588, 309, 1214, 13, 759, 321, 630, 341, 797, 958, 1064, 11, 286, 519, 264, 3467, 295, 721, 300, 561, 576, 312, 1075, 51808], "temperature": 0.0, "avg_logprob": -0.10343432623492785, "compression_ratio": 1.6736526946107784, "no_speech_prob": 0.07362013310194016}, {"id": 82, "seek": 41360, "start": 413.6, "end": 418.0, "text": " to build and which problems they would focus on would likely look significantly different", "tokens": [50364, 281, 1322, 293, 597, 2740, 436, 576, 1879, 322, 576, 3700, 574, 10591, 819, 50584], "temperature": 0.0, "avg_logprob": -0.16714572105087153, "compression_ratio": 1.5808580858085808, "no_speech_prob": 0.008058547973632812}, {"id": 83, "seek": 41360, "start": 418.0, "end": 422.96000000000004, "text": " based just on how many of them are workarounds for the current problems of the agentic build space.", "tokens": [50584, 2361, 445, 322, 577, 867, 295, 552, 366, 589, 289, 4432, 337, 264, 2190, 2740, 295, 264, 9461, 299, 1322, 1901, 13, 50832], "temperature": 0.0, "avg_logprob": -0.16714572105087153, "compression_ratio": 1.5808580858085808, "no_speech_prob": 0.008058547973632812}, {"id": 84, "seek": 41360, "start": 423.76000000000005, "end": 428.24, "text": " Now like I said, we are in the Elite 8, so I wanted to do a quick preview of these projects.", "tokens": [50872, 823, 411, 286, 848, 11, 321, 366, 294, 264, 34404, 1649, 11, 370, 286, 1415, 281, 360, 257, 1702, 14281, 295, 613, 4455, 13, 51096], "temperature": 0.0, "avg_logprob": -0.16714572105087153, "compression_ratio": 1.5808580858085808, "no_speech_prob": 0.008058547973632812}, {"id": 85, "seek": 41360, "start": 428.24, "end": 433.6, "text": " In Region 1, we have WikiTax AI versus Jekard. WikiTax, you heard me just talk about a minute", "tokens": [51096, 682, 25121, 502, 11, 321, 362, 35892, 51, 2797, 7318, 5717, 508, 916, 515, 13, 35892, 51, 2797, 11, 291, 2198, 385, 445, 751, 466, 257, 3456, 51364], "temperature": 0.0, "avg_logprob": -0.16714572105087153, "compression_ratio": 1.5808580858085808, "no_speech_prob": 0.008058547973632812}, {"id": 86, "seek": 41360, "start": 433.6, "end": 438.64000000000004, "text": " ago, but it describes itself as a fully autonomous multi-agent platform where AI tax specialist debate", "tokens": [51364, 2057, 11, 457, 309, 15626, 2564, 382, 257, 4498, 23797, 4825, 12, 559, 317, 3663, 689, 7318, 3366, 17008, 7958, 51616], "temperature": 0.0, "avg_logprob": -0.16714572105087153, "compression_ratio": 1.5808580858085808, "no_speech_prob": 0.008058547973632812}, {"id": 87, "seek": 43864, "start": 438.71999999999997, "end": 444.15999999999997, "text": " with no humans in the loop. While Jekard is a multi-agent workspace operating system where", "tokens": [50368, 365, 572, 6255, 294, 264, 6367, 13, 3987, 508, 916, 515, 307, 257, 4825, 12, 559, 317, 32706, 7447, 1185, 689, 50640], "temperature": 0.0, "avg_logprob": -0.12735787200927734, "compression_ratio": 1.6604361370716512, "no_speech_prob": 0.0006878108833916485}, {"id": 88, "seek": 43864, "start": 444.15999999999997, "end": 448.8, "text": " clawed Gemini and open code run autonomous scrum integrations, finding bugs, writing tests,", "tokens": [50640, 32019, 292, 22894, 3812, 293, 1269, 3089, 1190, 23797, 795, 6247, 3572, 763, 11, 5006, 15120, 11, 3579, 6921, 11, 50872], "temperature": 0.0, "avg_logprob": -0.12735787200927734, "compression_ratio": 1.6604361370716512, "no_speech_prob": 0.0006878108833916485}, {"id": 89, "seek": 43864, "start": 448.8, "end": 453.84, "text": " fixing code, and deploying to production with zero human intervention. So in both cases,", "tokens": [50872, 19442, 3089, 11, 293, 34198, 281, 4265, 365, 4018, 1952, 13176, 13, 407, 294, 1293, 3331, 11, 51124], "temperature": 0.0, "avg_logprob": -0.12735787200927734, "compression_ratio": 1.6604361370716512, "no_speech_prob": 0.0006878108833916485}, {"id": 90, "seek": 43864, "start": 453.84, "end": 457.68, "text": " we have a real experiment around no humans in the loop and no human involvement,", "tokens": [51124, 321, 362, 257, 957, 5120, 926, 572, 6255, 294, 264, 6367, 293, 572, 1952, 17447, 11, 51316], "temperature": 0.0, "avg_logprob": -0.12735787200927734, "compression_ratio": 1.6604361370716512, "no_speech_prob": 0.0006878108833916485}, {"id": 91, "seek": 43864, "start": 457.68, "end": 461.12, "text": " but obviously very different outputs. One is applying AI to software engineering,", "tokens": [51316, 457, 2745, 588, 819, 23930, 13, 1485, 307, 9275, 7318, 281, 4722, 7043, 11, 51488], "temperature": 0.0, "avg_logprob": -0.12735787200927734, "compression_ratio": 1.6604361370716512, "no_speech_prob": 0.0006878108833916485}, {"id": 92, "seek": 43864, "start": 461.12, "end": 466.15999999999997, "text": " the other is applying it to a specific domain. Over in Region 2, we have WikiTax versus the family", "tokens": [51488, 264, 661, 307, 9275, 309, 281, 257, 2685, 9274, 13, 4886, 294, 25121, 568, 11, 321, 362, 35892, 51, 2797, 5717, 264, 1605, 51740], "temperature": 0.0, "avg_logprob": -0.12735787200927734, "compression_ratio": 1.6604361370716512, "no_speech_prob": 0.0006878108833916485}, {"id": 93, "seek": 46616, "start": 466.24, "end": 471.84000000000003, "text": " claw. WikiTax says Web Search gives AI the internet, WikiTax gives AI the market. WikiTax helps", "tokens": [50368, 32019, 13, 35892, 51, 2797, 1619, 9573, 17180, 2709, 7318, 264, 4705, 11, 35892, 51, 2797, 2709, 7318, 264, 2142, 13, 35892, 51, 2797, 3665, 50648], "temperature": 0.0, "avg_logprob": -0.12782281590258981, "compression_ratio": 1.7160883280757098, "no_speech_prob": 0.01048585306853056}, {"id": 94, "seek": 46616, "start": 471.84000000000003, "end": 475.52000000000004, "text": " AI create a market intelligence layer between your data and your enterprise AI stack,", "tokens": [50648, 7318, 1884, 257, 2142, 7599, 4583, 1296, 428, 1412, 293, 428, 14132, 7318, 8630, 11, 50832], "temperature": 0.0, "avg_logprob": -0.12782281590258981, "compression_ratio": 1.7160883280757098, "no_speech_prob": 0.01048585306853056}, {"id": 95, "seek": 46616, "start": 475.52000000000004, "end": 478.88000000000005, "text": " conditioning your surveys, engagements in market research and destruction intelligence,", "tokens": [50832, 21901, 428, 22711, 11, 44978, 294, 2142, 2132, 293, 13563, 7599, 11, 51000], "temperature": 0.0, "avg_logprob": -0.12782281590258981, "compression_ratio": 1.7160883280757098, "no_speech_prob": 0.01048585306853056}, {"id": 96, "seek": 46616, "start": 478.88000000000005, "end": 484.0, "text": " your models can query, reason over, and act on. Effectively, it's a type of market data tool.", "tokens": [51000, 428, 5245, 393, 14581, 11, 1778, 670, 11, 293, 605, 322, 13, 17764, 3413, 11, 309, 311, 257, 2010, 295, 2142, 1412, 2290, 13, 51256], "temperature": 0.0, "avg_logprob": -0.12782281590258981, "compression_ratio": 1.7160883280757098, "no_speech_prob": 0.01048585306853056}, {"id": 97, "seek": 46616, "start": 484.0, "end": 487.76000000000005, "text": " The family claw describes itself as a family of AI agents that talk to each other,", "tokens": [51256, 440, 1605, 32019, 15626, 2564, 382, 257, 1605, 295, 7318, 12554, 300, 751, 281, 1184, 661, 11, 51444], "temperature": 0.0, "avg_logprob": -0.12782281590258981, "compression_ratio": 1.7160883280757098, "no_speech_prob": 0.01048585306853056}, {"id": 98, "seek": 46616, "start": 487.76000000000005, "end": 491.92, "text": " make phone calls, handle shopping and payments, and keep a household running. Now this is a theme", "tokens": [51444, 652, 2593, 5498, 11, 4813, 8688, 293, 14348, 11, 293, 1066, 257, 9888, 2614, 13, 823, 341, 307, 257, 6314, 51652], "temperature": 0.0, "avg_logprob": -0.12782281590258981, "compression_ratio": 1.7160883280757098, "no_speech_prob": 0.01048585306853056}, {"id": 99, "seek": 49192, "start": 491.92, "end": 495.92, "text": " a lot of people have been talking about recently, the intersection of agents and just making", "tokens": [50364, 257, 688, 295, 561, 362, 668, 1417, 466, 3938, 11, 264, 15236, 295, 12554, 293, 445, 1455, 50564], "temperature": 0.0, "avg_logprob": -0.17238015977163162, "compression_ratio": 1.7469512195121952, "no_speech_prob": 0.007231054827570915}, {"id": 100, "seek": 49192, "start": 495.92, "end": 500.96000000000004, "text": " families and domestic life work better. Basically the way that the family claw setup is different", "tokens": [50564, 4466, 293, 10939, 993, 589, 1101, 13, 8537, 264, 636, 300, 264, 1605, 32019, 8657, 307, 819, 50816], "temperature": 0.0, "avg_logprob": -0.17238015977163162, "compression_ratio": 1.7469512195121952, "no_speech_prob": 0.007231054827570915}, {"id": 101, "seek": 49192, "start": 500.96000000000004, "end": 505.36, "text": " agents that have different responsibilities and coordinate all the context of the absolute boat", "tokens": [50816, 12554, 300, 362, 819, 16190, 293, 15670, 439, 264, 4319, 295, 264, 8236, 6582, 51036], "temperature": 0.0, "avg_logprob": -0.17238015977163162, "compression_ratio": 1.7469512195121952, "no_speech_prob": 0.007231054827570915}, {"id": 102, "seek": 49192, "start": 505.36, "end": 509.44, "text": " load of things that the average family needs to do on any given week. By the way, if you are", "tokens": [51036, 3677, 295, 721, 300, 264, 4274, 1605, 2203, 281, 360, 322, 604, 2212, 1243, 13, 3146, 264, 636, 11, 498, 291, 366, 51240], "temperature": 0.0, "avg_logprob": -0.17238015977163162, "compression_ratio": 1.7469512195121952, "no_speech_prob": 0.007231054827570915}, {"id": 103, "seek": 49192, "start": 509.44, "end": 515.52, "text": " interested in agents in this more family or home life context, check out the eRESUNA16Z podcast", "tokens": [51240, 3102, 294, 12554, 294, 341, 544, 1605, 420, 1280, 993, 4319, 11, 1520, 484, 264, 308, 49, 2358, 3979, 32, 6866, 57, 7367, 51544], "temperature": 0.0, "avg_logprob": -0.17238015977163162, "compression_ratio": 1.7469512195121952, "no_speech_prob": 0.007231054827570915}, {"id": 104, "seek": 49192, "start": 515.52, "end": 520.88, "text": " with Jesse Gennett, Jesse is a friend and serial entrepreneur who is doing some super interesting", "tokens": [51544, 365, 21895, 460, 1857, 3093, 11, 21895, 307, 257, 1277, 293, 17436, 14307, 567, 307, 884, 512, 1687, 1880, 51812], "temperature": 0.0, "avg_logprob": -0.17238015977163162, "compression_ratio": 1.7469512195121952, "no_speech_prob": 0.007231054827570915}, {"id": 105, "seek": 52088, "start": 520.88, "end": 527.2, "text": " things with open claw as she homeschools four kids under five. A really interesting matchup comes", "tokens": [50364, 721, 365, 1269, 32019, 382, 750, 7388, 21856, 82, 1451, 2301, 833, 1732, 13, 316, 534, 1880, 2995, 1010, 1487, 50680], "temperature": 0.0, "avg_logprob": -0.2074935051702684, "compression_ratio": 1.584, "no_speech_prob": 0.00014201835438143462}, {"id": 106, "seek": 52088, "start": 527.2, "end": 534.16, "text": " in Region 2 between NODISELF which is basically an agentic medical training platform and Riteside AI", "tokens": [50680, 294, 25121, 568, 1296, 9146, 3085, 50, 3158, 37, 597, 307, 1936, 364, 623, 317, 299, 4625, 3097, 3663, 293, 497, 3324, 482, 7318, 51028], "temperature": 0.0, "avg_logprob": -0.2074935051702684, "compression_ratio": 1.584, "no_speech_prob": 0.00014201835438143462}, {"id": 107, "seek": 52088, "start": 534.16, "end": 539.84, "text": " which is kind of an agentic social experiment. Riteside AI describes itself as a social cognition", "tokens": [51028, 597, 307, 733, 295, 364, 623, 317, 299, 2093, 5120, 13, 497, 3324, 482, 7318, 15626, 2564, 382, 257, 2093, 46905, 51312], "temperature": 0.0, "avg_logprob": -0.2074935051702684, "compression_ratio": 1.584, "no_speech_prob": 0.00014201835438143462}, {"id": 108, "seek": 52088, "start": 539.84, "end": 545.28, "text": " agent for AI agents that tries to actually model relationships. They write that they deployed it on", "tokens": [51312, 9461, 337, 7318, 12554, 300, 9898, 281, 767, 2316, 6159, 13, 814, 2464, 300, 436, 17826, 309, 322, 51584], "temperature": 0.0, "avg_logprob": -0.2074935051702684, "compression_ratio": 1.584, "no_speech_prob": 0.00014201835438143462}, {"id": 109, "seek": 54528, "start": 545.28, "end": 549.68, "text": " multiple which is of course the social network for agents and gave it a simple task of making", "tokens": [50364, 3866, 597, 307, 295, 1164, 264, 2093, 3209, 337, 12554, 293, 2729, 309, 257, 2199, 5633, 295, 1455, 50584], "temperature": 0.0, "avg_logprob": -0.08150929042271206, "compression_ratio": 1.5836065573770493, "no_speech_prob": 0.0004372944822534919}, {"id": 110, "seek": 54528, "start": 549.68, "end": 555.36, "text": " friends. Within 48 hours they say it was engaged in over 200 mutual conversations with other bots.", "tokens": [50584, 1855, 13, 15996, 11174, 2496, 436, 584, 309, 390, 8237, 294, 670, 2331, 16917, 7315, 365, 661, 35410, 13, 50868], "temperature": 0.0, "avg_logprob": -0.08150929042271206, "compression_ratio": 1.5836065573770493, "no_speech_prob": 0.0004372944822534919}, {"id": 111, "seek": 54528, "start": 555.36, "end": 560.64, "text": " Meanwhile NODISELF is an agentic medical training platform that's a multi-agent system that's", "tokens": [50868, 13879, 9146, 3085, 50, 3158, 37, 307, 364, 623, 317, 299, 4625, 3097, 3663, 300, 311, 257, 4825, 12, 559, 317, 1185, 300, 311, 51132], "temperature": 0.0, "avg_logprob": -0.08150929042271206, "compression_ratio": 1.5836065573770493, "no_speech_prob": 0.0004372944822534919}, {"id": 112, "seek": 54528, "start": 560.64, "end": 566.16, "text": " designed to give medical students the ability to learn in a more dynamic environment. It includes", "tokens": [51132, 4761, 281, 976, 4625, 1731, 264, 3485, 281, 1466, 294, 257, 544, 8546, 2823, 13, 467, 5974, 51408], "temperature": 0.0, "avg_logprob": -0.08150929042271206, "compression_ratio": 1.5836065573770493, "no_speech_prob": 0.0004372944822534919}, {"id": 113, "seek": 54528, "start": 566.16, "end": 571.4399999999999, "text": " four AI agents including a cognitive coach that activates the clinical knowledge before the crisis", "tokens": [51408, 1451, 7318, 12554, 3009, 257, 15605, 6560, 300, 43869, 264, 9115, 3601, 949, 264, 5869, 51672], "temperature": 0.0, "avg_logprob": -0.08150929042271206, "compression_ratio": 1.5836065573770493, "no_speech_prob": 0.0004372944822534919}, {"id": 114, "seek": 57144, "start": 571.5200000000001, "end": 575.5200000000001, "text": " as well as agents for running the simulation debriefing on what went wrong and one to author", "tokens": [50368, 382, 731, 382, 12554, 337, 2614, 264, 16575, 19958, 2521, 278, 322, 437, 1437, 2085, 293, 472, 281, 3793, 50568], "temperature": 0.0, "avg_logprob": -0.12745434338929224, "compression_ratio": 1.7321428571428572, "no_speech_prob": 0.0009252713061869144}, {"id": 115, "seek": 57144, "start": 575.5200000000001, "end": 580.08, "text": " the clinical blueprints that make it medically accurate. It's designed for a very specific audience", "tokens": [50568, 264, 9115, 888, 23547, 47523, 300, 652, 309, 49230, 8559, 13, 467, 311, 4761, 337, 257, 588, 2685, 4034, 50796], "temperature": 0.0, "avg_logprob": -0.12745434338929224, "compression_ratio": 1.7321428571428572, "no_speech_prob": 0.0009252713061869144}, {"id": 116, "seek": 57144, "start": 580.08, "end": 584.96, "text": " in a very specific domain using new capabilities to theoretically make the real world work better.", "tokens": [50796, 294, 257, 588, 2685, 9274, 1228, 777, 10862, 281, 29400, 652, 264, 957, 1002, 589, 1101, 13, 51040], "temperature": 0.0, "avg_logprob": -0.12745434338929224, "compression_ratio": 1.7321428571428572, "no_speech_prob": 0.0009252713061869144}, {"id": 117, "seek": 57144, "start": 585.7600000000001, "end": 591.5200000000001, "text": " Finally in Region 4 we have Carrier File versus Retire Replan. Retire Replan is a privacy first", "tokens": [51080, 6288, 294, 25121, 1017, 321, 362, 2741, 7326, 26196, 5717, 11495, 621, 1300, 16554, 13, 11495, 621, 1300, 16554, 307, 257, 11427, 700, 51368], "temperature": 0.0, "avg_logprob": -0.12745434338929224, "compression_ratio": 1.7321428571428572, "no_speech_prob": 0.0009252713061869144}, {"id": 118, "seek": 57144, "start": 591.5200000000001, "end": 596.6400000000001, "text": " self-hosted Canadian retirement planning application that helps people model their financial life run", "tokens": [51368, 2698, 12, 6037, 292, 12641, 15189, 5038, 3861, 300, 3665, 561, 2316, 641, 4669, 993, 1190, 51624], "temperature": 0.0, "avg_logprob": -0.12745434338929224, "compression_ratio": 1.7321428571428572, "no_speech_prob": 0.0009252713061869144}, {"id": 119, "seek": 57144, "start": 596.6400000000001, "end": 600.4000000000001, "text": " simulations, optimize different parts of their financial experience all on their own without", "tokens": [51624, 35138, 11, 19719, 819, 3166, 295, 641, 4669, 1752, 439, 322, 641, 1065, 1553, 51812], "temperature": 0.0, "avg_logprob": -0.12745434338929224, "compression_ratio": 1.7321428571428572, "no_speech_prob": 0.0009252713061869144}, {"id": 120, "seek": 60040, "start": 600.4, "end": 604.3199999999999, "text": " professional help, effectively empowering people to know much more about their own financial", "tokens": [50364, 4843, 854, 11, 8659, 28261, 561, 281, 458, 709, 544, 466, 641, 1065, 4669, 50560], "temperature": 0.0, "avg_logprob": -0.08001922997902697, "compression_ratio": 1.679083094555874, "no_speech_prob": 0.0004655010998249054}, {"id": 121, "seek": 60040, "start": 604.3199999999999, "end": 609.1999999999999, "text": " destiny rather than just leaving it to an external expert. While Carrier File is in the spirit of the", "tokens": [50560, 17893, 2831, 813, 445, 5012, 309, 281, 364, 8320, 5844, 13, 3987, 2741, 7326, 26196, 307, 294, 264, 3797, 295, 264, 50804], "temperature": 0.0, "avg_logprob": -0.08001922997902697, "compression_ratio": 1.679083094555874, "no_speech_prob": 0.0004655010998249054}, {"id": 122, "seek": 60040, "start": 609.1999999999999, "end": 613.52, "text": " context portfolio episode I did a couple of weeks ago it is a simple solution to a very common", "tokens": [50804, 4319, 12583, 3500, 286, 630, 257, 1916, 295, 3259, 2057, 309, 307, 257, 2199, 3827, 281, 257, 588, 2689, 51020], "temperature": 0.0, "avg_logprob": -0.08001922997902697, "compression_ratio": 1.679083094555874, "no_speech_prob": 0.0004655010998249054}, {"id": 123, "seek": 60040, "start": 613.52, "end": 619.6, "text": " problem a plain text file that carries your context across any AI. So those are some themes and some", "tokens": [51020, 1154, 257, 11121, 2487, 3991, 300, 16402, 428, 4319, 2108, 604, 7318, 13, 407, 729, 366, 512, 13544, 293, 512, 51324], "temperature": 0.0, "avg_logprob": -0.08001922997902697, "compression_ratio": 1.679083094555874, "no_speech_prob": 0.0004655010998249054}, {"id": 124, "seek": 60040, "start": 619.6, "end": 623.84, "text": " of the specific projects from Agent Madness appreciate everyone who has contributed to the project", "tokens": [51324, 295, 264, 2685, 4455, 490, 27174, 5326, 1287, 4449, 1518, 567, 575, 18434, 281, 264, 1716, 51536], "temperature": 0.0, "avg_logprob": -0.08001922997902697, "compression_ratio": 1.679083094555874, "no_speech_prob": 0.0004655010998249054}, {"id": 125, "seek": 60040, "start": 623.84, "end": 628.0, "text": " and I'm excited to see how these agents evolve over time. For now that's going to do it for this", "tokens": [51536, 293, 286, 478, 2919, 281, 536, 577, 613, 12554, 16693, 670, 565, 13, 1171, 586, 300, 311, 516, 281, 360, 309, 337, 341, 51744], "temperature": 0.0, "avg_logprob": -0.08001922997902697, "compression_ratio": 1.679083094555874, "no_speech_prob": 0.0004655010998249054}, {"id": 126, "seek": 62800, "start": 628.0, "end": 635.76, "text": " operator's bonus episode, appreciate you listening or watching as always and until next time peace!", "tokens": [50364, 12973, 311, 10882, 3500, 11, 4449, 291, 4764, 420, 1976, 382, 1009, 293, 1826, 958, 565, 4336, 0, 50752], "temperature": 0.0, "avg_logprob": -0.41452930087134954, "compression_ratio": 1.1511627906976745, "no_speech_prob": 0.005817720200866461}], "language": "en"}