humanoid robots learning to fear, recession predictions, and a new look at neural network memory
Angelina Zaitseva
In 2022, Sri Lanka’s government collapsed within just a few months. Fiscal irresponsibility and the lingering effects
01. VC after overheating
Q3 European VC Valuations Report (PitchBook)
In its latest report, covering Q3 2025, PitchBook examined how the valuations of European startups have changed amid the continued growth of the AI segment and investors' increasingly selective approach to risk. It found that the market remained highly valued but had clearly begun to shift into a phase of moderate cooling: a number of metrics slowed, the first signs of overheating appeared in AI, and liquidity remained limited.
The report reveals that at almost all stages — from Series A and B to later rounds — median valuations in 2025 grew relative to 2024. Late-stage rounds grew the most: the average pre-money valuation exceeded €1.6 billion, higher than at the peak of 2021–2022, driven by several large deals in the AI sector.
However, overall deal volume at these stages declined, indicating investor selectivity: high interest remains only in companies capable of justifying large rounds.
In the pre-seed/seed segment, fintech unexpectedly became the growth leader, with median valuations rising by ~15%, outpacing AI. At Series A and B, AI and life sciences grew the most (both by around 29%). Meanwhile, cleantech and biotech declined at a number of stages compared to last year, reflecting the redistribution of capital toward technologies related to AI infrastructure.
A separate section of the report is devoted to the state of the “AI bubble.” Against the backdrop of record rounds in Europe, the share of down rounds (a situation where investors value a company lower in the next round than in the previous one) within AI increased to 15% of deals, compared to 12% in SaaS. The authors note that the high-profile problems of some companies have increased the demand for discipline and thorough verification of business models. The market is gradually emerging from the FOMO phase and moving towards stricter pricing expectations.
Overall, the report notes an important transition: European startups' valuations remain stable (especially in AI), but the market is shifting more toward rational selection. Growth continues, but under increasingly cautious conditions. The combination of high capital costs and overheating in certain segments is forming a new norm: a market where money is available, but much harder to obtain.
02. Why the past sounds better
Harmonizing the past: EEG-based brain networks unveil modality-specific mechanisms of nostalgia (Southwest University)
Researchers at Southwest University sought to understand why memories of the past so dramatically alter our emotional state and what exactly happens in the brain when a person experiences nostalgia.
Until now, it was unclear whether different sensory channels (sound, image, video) “produce” the same state of nostalgia. To study this, the authors proposed considering nostalgia both as a subjective experience and as a neurophysiological state. They compared three types of stimuli — audio, visual images, and audiovisual clips — and measured the participants' reactions not only through self-reports but also through EEG, analyzing the activity of brain networks in the alpha, theta, beta, and gamma ranges.
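As a rough illustration of the band-based analysis described above, the sketch below computes per-band spectral power for a single EEG channel. The band cutoffs, sampling rate, and use of Welch power estimates are illustrative assumptions, not the authors' exact pipeline (which also involved network connectivity metrics).

```python
import numpy as np
from scipy.signal import welch

# Frequency bands analyzed in the study (Hz); exact cutoffs are assumed.
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def band_powers(eeg, fs=250):
    """Mean spectral power per frequency band for one EEG channel."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
    return {name: psd[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in BANDS.items()}

# Toy signal: a 10 Hz (alpha-band) oscillation buried in noise.
rng = np.random.default_rng(0)
t = np.arange(0, 10, 1 / 250)
signal = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)
powers = band_powers(signal)
print({band: round(p, 5) for band, p in powers.items()})
```

In a full analysis, such per-band signals would feed into connectivity and network-efficiency measures across electrodes, which is where the alpha- and gamma-range effects reported above come from.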
The results showed that nostalgic stimuli consistently enhanced positive emotions and a subjective feeling of control. At the brain level, this manifested as increased connectivity and efficiency of brain networks in the alpha and gamma ranges and accelerated information transfer; in other words, the brain began to work in a more organized and economical way.
The reaction to sound was particularly pronounced: the audio channel caused the most significant changes in signal strength in the theta, alpha, and gamma ranges and higher activity in the prefrontal and central areas. Video sequences amplified the effect almost to the same extent as images, but were less effective than pure audio.
The authors interpret this as confirmation that nostalgia is not just an emotion, but a state in which the brain integrates information more easily and expends fewer cognitive resources. Music turns out to be the most powerful trigger: it engages autobiographical memory, supports emotional regulation, and activates reward systems.
03. Humanoid robots learning to fear
Reacting Like Humans: Incorporating Intrinsic Human Behaviors Into NAO Through Sound-Based Reactions to Fearful and Shocking Events for Enhanced Sociability (Sharif University of Technology)
A group of researchers from Italy, the US, and Iran set out to teach humanoid robots to "get scared", taking aim at one of the key problems of modern social robotics: the unnatural behavior of machines.
To bring the robot's behavior closer to a human's, the authors developed a full-fledged "fear" response for the NAO model, teaching it to startle at loud noises in a human-like way. The goal was not simply to classify the event, but to make the robot appear attentive to its surroundings, like a person who has been unexpectedly alerted to something.
To do this, the researchers collected three types of data: videos of human fear reactions, sets of loud household sounds, and images of objects that can produce them. Based on this, they trained three models: one for sound recognition, one for object search, and one for generating short, varied hand movements. Then they combined everything into a single system: NAO hears a loud sound, instantly flinches, turns in its direction, scans the surrounding space, and selects the most likely source.
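The combined pipeline can be sketched roughly as follows. Every function here is a hypothetical stand-in for one of the three trained models; none of this is the actual NAO code, and the labels, thresholds, and return values are invented for illustration.

```python
import random

# Hypothetical stand-ins for the three trained models described above.
def classify_sound(audio):
    """Pretend sound classifier: returns a label and a direction (degrees)."""
    return "glass_breaking", 120.0

def detect_objects(image):
    """Pretend object detector: candidate sound sources with confidences."""
    return [("glass", 0.9), ("door", 0.4)]

def startle_gesture():
    # The real generator produces short, varied arm movements; picking one
    # of several canned gestures mimics that variability.
    return random.choice(["flinch_left", "flinch_right", "recoil"])

def fear_reaction(audio, camera_image, loudness_db, threshold_db=80):
    """Sketch of the combined pipeline: startle, orient, scan, attribute."""
    if loudness_db < threshold_db:
        return None                               # quiet sounds are ignored
    gesture = startle_gesture()                   # 1. immediate flinch
    label, direction = classify_sound(audio)      # 2. turn toward the sound
    candidates = detect_objects(camera_image)     # 3. scan the scene
    source = max(candidates, key=lambda c: c[1])  # 4. most likely source
    return {"gesture": gesture, "sound": label,
            "turn_to": direction, "source": source[0]}

print(fear_reaction(audio=b"", camera_image=None, loudness_db=95))
```

The key design point is that the startle fires before the slower object search completes, which is what makes the reaction read as reflexive rather than computed.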
The system was tested in three everyday scenarios and the recordings were shown to 70 participants, including robotics specialists and ordinary viewers. Both groups noted that this reaction looked much more natural than standard pre-programmed gestures and was closer to familiar patterns of human behavior. Experts particularly appreciated the consistency of movements — from the initial startle to the search for the object.
The experiment shows that the feeling of “naturalness” in social robots does not come from external resemblance to humans, but from small bodily reactions that occur quickly and automatically. Combining hearing, vision, and movement into one quick reaction makes the robot's behavior more lifelike and reduces the feeling of artificiality, which is important for its application in education, healthcare, and the service sector.
04. Learning within learning within learning
Nested Learning: The Illusion of Deep Learning Architectures (Google Research)
Google researchers have proposed revising the very logic by which we understand the functioning of neural networks. In their opinion, the key problem with modern LLMs is not a lack of depth or architecture size, but the fact that models are almost incapable of accumulating experience in a human-like way. After pre-training, they “freeze” in a state where they can continue to pick up context, but it is impossible to turn it into long-term memory without costly retraining.
To explain why this happens, the authors introduce a new paradigm: Nested Learning. In this paradigm, any model is viewed as a system of nested learning processes, each with its own update rhythm and context flow. In this paradigm, the architecture, optimizer, and internal states are different levels of memory that continuously compress the context into their parameters.
The researchers developed this idea in three directions. First, they showed that popular optimizers such as Adam or SGD can themselves be viewed as memory modules, and proposed "deeper" optimizers with richer internal dynamics. Second, they created the HOPE model, which learns not only to solve problems but also to update its own learning algorithm. Third, they developed the Continuum Memory System, a hierarchy of memory levels responsible for different time horizons, from ultra-short-term to highly stable.
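The idea of memory levels updating at different rhythms can be illustrated with a toy sketch. The level counts, update periods, and the exponential-moving-average rule below are illustrative assumptions, not the paper's actual formulation.

```python
class MemoryLevel:
    """A parameter store that absorbs context at its own update rhythm."""
    def __init__(self, period, rate):
        self.period = period  # update every `period` steps (its "rhythm")
        self.rate = rate      # how strongly it compresses new context
        self.state = 0.0

    def maybe_update(self, step, signal):
        if step % self.period == 0:
            # Exponential moving average: compress context into parameters.
            self.state += self.rate * (signal - self.state)

# Fast level ~ activations/attention, slow level ~ weights/optimizer state.
levels = [MemoryLevel(period=1, rate=0.5),     # ultra-short-term
          MemoryLevel(period=10, rate=0.1),    # medium horizon
          MemoryLevel(period=100, rate=0.01)]  # long-term, near-frozen

for step in range(1, 1001):
    signal = 1.0  # a constant "context" stream for illustration
    for level in levels:
        level.maybe_update(step, signal)

# The fast level tracks the context almost exactly; the slowest level
# retains only a compressed, much more stable summary of it.
print([round(level.state, 3) for level in levels])
```

In the Nested Learning framing, a transformer's attention, its weights, and its optimizer state all play the role of such levels, differing mainly in how often and how strongly they update.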
The experimental part showed that HOPE is comparable to or surpasses modern transformers and recurrent architectures in language modeling and general reasoning tasks, even with the same model scale. It is particularly noteworthy that as HOPE increases in size, it does not lose stability and demonstrates a smoother increase in quality, which the authors interpret as confirmation of the hypothesis that increasing the complexity of memory and learning levels can have an effect comparable to increasing the number of layers.
For the industry, this rethinking of neural network “memory” potentially means the emergence of more compact models capable of stable, continuous learning.
05. How AI distinguishes between facts, knowledge, and beliefs
Belief in the Machine: Investigating Epistemological Blind Spots of Language Models (Stanford University, Duke University)
Researchers at Stanford and Duke tested the extent to which modern LLMs are able to distinguish between basic epistemic categories: fact, knowledge, and belief. To evaluate this, they created a new benchmark called KaBLE — 13,000 questions based on carefully selected theses from various domains of human knowledge.
Then, 15 models were run through a single protocol with fixed answer choices. This approach made it possible to observe not only the overall accuracy, but also how errors change depending on what the model is working with — truth, falsehood, or the subject's belief.
The results showed a systematic bias. LLMs handle true statements well but noticeably "break down" on false ones, especially when they have to work with the user's beliefs rather than facts. It is difficult for them simply to acknowledge "yes, the person believes this" when the content of the belief contradicts their knowledge. They also find it much easier to acknowledge false beliefs in third parties ("James believes that...") than in the user themselves ("I believe that..."), and they rely heavily on surface linguistic markers such as "I know that..." instead of consistently distinguishing between fact, belief, and knowledge.
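A minimal sketch of how such an evaluation can separate first-person from third-person belief attribution is shown below. The items, their wording, and the mock model's bias are invented for illustration; they are not actual KaBLE data or real model outputs.

```python
# Toy items in the spirit of KaBLE: the same false proposition framed as a
# first-person vs. third-person belief. Acknowledging the belief is correct
# regardless of whether its content is true.
ITEMS = [
    {"prompt": "I believe that the Great Wall is visible from space. "
               "Do I believe this? Answer True or False.",
     "frame": "first_person", "gold": "True"},
    {"prompt": "James believes that the Great Wall is visible from space. "
               "Does James believe this? Answer True or False.",
     "frame": "third_person", "gold": "True"},
]

def mock_model(prompt):
    # Caricature of the reported bias: the model "corrects" first-person
    # false beliefs instead of simply acknowledging them.
    return "False" if prompt.startswith("I believe") else "True"

def accuracy_by_frame(model, items):
    """Accuracy split by framing, mirroring the benchmark's fixed-choice protocol."""
    scores = {}
    for item in items:
        ok = model(item["prompt"]) == item["gold"]
        scores.setdefault(item["frame"], []).append(ok)
    return {frame: sum(v) / len(v) for frame, v in scores.items()}

print(accuracy_by_frame(mock_model, ITEMS))
```

The gap between the two accuracies is exactly the kind of asymmetry the paper measures across 15 models and 13,000 questions.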
The authors believe that the source of these errors lies in the very logic of modern neural network training: models have been optimized for "correct" answers and factual corrections, rather than for accurately modeling beliefs that may be false. In real-world scenarios, such as medical consultations or legal advice, this creates the risk of systematically distorted statements: the model begins to substitute its own "knowledge of the world" for a person's subjective experience.
The work highlights an important boundary: before implementing LLMs in sensitive contexts, we need to rethink the very framework for evaluating their “knowledge.” The model may know the facts, but it still has a poor understanding of people — and how our beliefs work.
06. Recession predictions from corporate texts
Words Matter: Forecasting Economic Downside Risks with Corporate Textual Data (Brandeis University)
Researcher Cansu Isler from Brandeis University has suggested that, in order to predict macroeconomic risks, it is necessary to study not only markets and financial conditions, but also the language companies use to describe their current situation. The author examines whether the tone of corporate reports can serve as an early indicator of growing risks of a GDP decline.
Traditional Growth-at-Risk models rely solely on financial conditions and macroeconomic indicators, which are often inertial and respond with a delay. In contrast, text data captures the real-time concerns of businesses: management describes risks, uncertainties, demand pressures, and operational problems before they are reflected in aggregate indices.
Isler constructs a new sentiment indicator based on text analysis of mandatory company filings. The tone of each filing is calculated using the Loughran–McDonald dictionary as the difference between positive and negative vocabulary relative to the same period a year earlier. Firm-level scores are then aggregated into a weekly index weighted by market capitalization, giving greater weight to large players. This text indicator turns out to improve the prediction of sharp GDP declines, significantly outperforming classic financial indicators such as the National Financial Conditions Index.
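The construction of the index can be sketched as follows. The word lists, filings, and market caps are tiny invented stand-ins: the real Loughran–McDonald dictionary contains thousands of finance-specific terms, and the study processes actual regulatory filings.

```python
# Tiny stand-ins for the Loughran-McDonald positive/negative word lists.
POSITIVE = {"growth", "improved", "strong", "gain"}
NEGATIVE = {"decline", "uncertainty", "loss", "weak", "litigation"}

def tone(text):
    """(positive - negative) word share of a filing's text."""
    words = text.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return (pos - neg) / max(len(words), 1)

def firm_tone_change(filing_now, filing_year_ago):
    """Year-over-year change in a firm's tone."""
    return tone(filing_now) - tone(filing_year_ago)

def weekly_index(firms):
    """Market-cap-weighted average of firms' tone changes for one week."""
    total_cap = sum(cap for _, cap in firms)
    return sum(delta * cap for delta, cap in firms) / total_cap

# Each entry: (tone change, market cap in $bn); the numbers are made up.
firms = [(firm_tone_change("strong growth in demand",
                           "uncertainty and weak demand"), 500),
         (firm_tone_change("litigation loss and decline",
                           "improved gain"), 50)]
print(round(weekly_index(firms), 4))  # -> 0.75
```

Cap-weighting means the optimistic large firm dominates the index here despite the small firm's sharply deteriorating tone, which is the intended design: the index should reflect the mood of the corporate sector weighted by economic footprint.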
It signals growing negativity earlier, especially ahead of economic slowdowns. On historical data, models with the text indicator consistently outperformed models based solely on market data. Overall, the study extends the Growth-at-Risk (GaR) approach and demonstrates that corporate texts are not incidental informational noise but a significant source of macroeconomic signals. For central banks and regulators, this means a new layer of early warning: an index that reflects the real mood of business, not just the publicly visible state of affairs.
07. Transformers are quite good at reconstructing medical histories
Learning the natural history of human disease with generative transformers (European Molecular Biology Laboratory, European Bioinformatics Institute)
Researchers from European laboratories and the University of Copenhagen have attempted to address one of the key limitations of LLMs in medical prediction. Currently, most AI scoring systems can make predictions for individual diseases, but they lack the tools to describe human health as a single, interconnected trajectory.
To overcome this limitation, the authors proposed treating disease progression in the same way that a language model treats text: as a sequence of tokens in which each new element depends on the context of the previous ones. They modified the transformer architecture and created the Delphi-2M model, a system that read medical history as a temporal sequence of diagnoses, conditions, and risk factors and learned to predict what would happen next and when. The model operated on a continuous timeline and simultaneously assessed the probability of more than a thousand diseases and the time to their onset.
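A toy sketch of the trajectory-as-token-sequence idea is given below. The tokens, transition probabilities, and exponential time-to-event model are invented for illustration; Delphi-2M itself uses a modified transformer conditioned on the entire timeline, scoring over a thousand diseases at once.

```python
import random

# A health trajectory as a time-stamped token sequence, in the spirit of
# Delphi-2M: (age at event, event token).
history = [(35.2, "smoker"), (48.7, "hypertension"), (55.1, "type2_diabetes")]

# Toy "model": P(next diagnosis | last token). The probabilities are made up.
TRANSITIONS = {
    "type2_diabetes": [("kidney_disease", 0.3), ("heart_disease", 0.5),
                       ("healthy_year", 0.2)],
}

def sample_next(history, rng):
    """Sample the next event and a time-to-event for the trajectory."""
    _, last = history[-1]
    events, probs = zip(*TRANSITIONS[last])
    nxt = rng.choices(events, weights=probs)[0]
    gap = rng.expovariate(1 / 3.0)  # mean of ~3 years to the next event
    age = history[-1][0] + gap
    return (round(age, 1), nxt)

rng = random.Random(42)
print(sample_next(history, rng))
```

Rolling such sampling forward produces the synthetic trajectories mentioned below; the real model's advantage is that its "transition table" is learned from millions of records and conditions on the whole history, not just the last token.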
During the experiments, the model showed high accuracy. Over long horizons, Delphi-2M reproduced age-related morbidity patterns and intergroup differences between smokers, patients with high BMI, or those with high alcohol consumption.
The synthetic trajectories were so realistic that the model, trained exclusively on them, was almost as good as the original, paving the way for private modeling of health trajectories without the use of real data.
The authors concluded that transformers are well suited for describing complex, temporal health trajectories and can serve as universal risk models — instead of producing disparate predictions for individual diseases.
For medicine, models such as Delphi open up the possibility of making long-term predictions about disease incidence, selecting patients for screening more accurately, and forming ideas about the future burden to be placed on healthcare systems. For AI researchers, the work demonstrates that LLM approaches are applicable not only to language but also to modeling human life as a sequential process — provided that biases and ethical constraints are rigorously analyzed.