Whether you're a scientist brainstorming research ideas or a CEO hoping to automate a task in human resources or finance, you'll find that artificial intelligence (AI) tools are becoming the assistants you didn't know you needed. In particular, many professionals are tapping into the talents of semi-autonomous software systems called AI agents, which can call on AI at specific points to solve problems and complete tasks.
MIT researchers have developed a new method for designing 3D structures that can be transformed from a flat configuration into their curved, fully formed shape with only a single pull of a string.
Every task we perform on a computer, whether number crunching, watching a video, or typing out an article, requires different components of the machine to interact with one another. "Communication is massively crucial for any computation," says former SFI Graduate Fellow Abhishek Yadav, a Ph.D. student at the University of New Mexico. But scientists don't fully grasp how much energy computational devices spend on communication.
As language models (LMs) improve at tasks like generating images, answering trivia questions, and doing simple math, you might think that human-like reasoning is around the corner. In reality, they still trail us by a wide margin on complex tasks. Try playing Sudoku with one, for instance, where you fill in the numbers one through nine so that each appears only once in every row, column, and three-by-three section of a nine-by-nine grid. Your AI opponent will either fail to fill in boxes on its own or do so inefficiently, although it can verify whether you've filled yours out correctly.
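The verification side of that claim is simple to state in code. Here is a minimal sketch of a Sudoku validity check, assuming the finished puzzle is a 9x9 list of lists of integers; the function name and representation are illustrative, not drawn from the research.

```python
# A minimal sketch of the "verification" side of Sudoku, assuming the
# finished puzzle is a 9x9 list of lists of ints; names are illustrative.
def is_valid_sudoku(grid):
    target = set(range(1, 10))
    for i in range(9):
        row = set(grid[i])
        col = {grid[r][i] for r in range(9)}
        if row != target or col != target:
            return False
    # Each of the nine 3x3 sections must also contain 1-9 exactly once.
    for br in range(0, 9, 3):
        for bc in range(0, 9, 3):
            box = {grid[br + r][bc + c] for r in range(3) for c in range(3)}
            if box != target:
                return False
    return True
```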
When scientists test algorithms that sort or classify data, they often turn to a trusted tool called Normalized Mutual Information (or NMI) to measure how well an algorithm's output matches reality. But according to new research, that tool may not be as reliable as many assume.
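For context, NMI compares two labelings of the same items, scoring 1.0 for a perfect match and values near 0 when the labelings share no information. Below is a minimal sketch of a typical NMI check using scikit-learn's implementation; the labels are invented for illustration, and scikit-learn exposes several normalization variants via the average_method parameter.

```python
from sklearn.metrics import normalized_mutual_info_score

true_labels = [0, 0, 0, 1, 1, 1, 2, 2, 2]   # ground-truth classes
pred_labels = [0, 0, 1, 1, 1, 1, 2, 2, 0]   # an algorithm's clustering

score = normalized_mutual_info_score(true_labels, pred_labels)
print(f"NMI: {score:.3f}")  # 1.0 = identical partitions, ~0 = unrelated
```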
If you open a banking app, play a mobile game or scroll through a news feed every day while riding the bus, your commuting routine is probably reinforcing your smartphone habit, according to new research showing that phone habits are stronger in locations people end up in automatically rather than by deliberate choice.
Researchers at TU Wien have discovered an unexpected connection between two very different areas of artificial intelligence: Large Language Models (LLMs) can help solve logical problems—without actually "understanding" them.
Today's artificial intelligence models can't even tie their own shoes.
Most languages rely on word position and sentence structure to convey meaning. For example, "The cat sat on the box" is not the same as "The box was on the cat." Over a long text, like a financial document or a novel, the syntactic relationships among these words keep shifting.
Generative AIs may not be as creative as we assume. Publishing in the journal Patterns, researchers show that when an image-generating AI and an image-describing AI pass the same scene back and forth, the exchange quickly veers off topic.
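The setup resembles a game of telephone. Here is a minimal sketch of that loop; it assumes nothing about the specific models, and generate_image and describe_image are hypothetical stand-ins supplied by the caller, not functions from the paper.

```python
from typing import Any, Callable

def telephone_game(
    initial_description: str,
    generate_image: Callable[[str], Any],  # any text-to-image model
    describe_image: Callable[[Any], str],  # any image-captioning model
    rounds: int = 10,
) -> list[str]:
    """Pass a scene back and forth between two models and record how the
    description drifts from the original, round after round."""
    history = [initial_description]
    description = initial_description
    for _ in range(rounds):
        image = generate_image(description)
        description = describe_image(image)
        history.append(description)
    return history
```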
A new approach is making it easier to visualize lifelike 3D environments from everyday photos already shared online, opening new possibilities in industries such as gaming, virtual tourism and cultural preservation.
When large language models (LLMs) make decisions about networking and friendship, they tend to act like people, in both synthetic simulations and real-world network contexts.
Imagine a continuum soft robotic arm bending around a bunch of grapes or a head of broccoli, adjusting its grip in real time as it lifts the object. Unlike traditional rigid robots, which generally avoid contact with the environment as much as possible and keep their distance from humans for safety reasons, this arm senses subtle forces, stretching and flexing in ways that mimic the compliance of a human hand. Its every motion is calculated to avoid excessive force while completing the task efficiently.
Even networks long considered "untrainable" can learn effectively with a bit of a helping hand. Researchers at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) have shown that a brief period of alignment between neural networks, a method they call guidance, can dramatically improve the performance of architectures previously thought unsuitable for modern tasks.
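As a rough illustration of the idea (a minimal sketch, not CSAIL's actual procedure; the architectures, the choice of layer to align, and the schedule are all assumptions), one could briefly train a struggling network to match a healthier network's intermediate activations before switching to ordinary task training:

```python
import torch
import torch.nn as nn

# Hypothetical "student" with an architecture that trains poorly, and a
# well-behaved "guide"; both map 32 features to 10 classes via a 64-unit layer.
student = nn.Sequential(nn.Linear(32, 64), nn.Tanh(), nn.Linear(64, 10))
guide = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))

opt = torch.optim.Adam(student.parameters(), lr=1e-3)
mse = nn.MSELoss()

# Phase 1: a brief alignment period, nudging the student's first-layer
# activations toward the guide's on random stand-in batches.
for _ in range(100):
    x = torch.randn(64, 32)
    with torch.no_grad():
        target = guide[0](x)
    loss = mse(student[0](x), target)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Phase 2: the student then continues with ordinary task training.
```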
Developers can now integrate large language models directly into their existing software using a single line of code, with no manual prompt engineering required. The open-source framework, known as byLLM, automatically generates context-aware prompts based on the meaning and structure of the program, helping developers avoid hand-crafting detailed prompts, according to a conference paper presented at the SPLASH conference in Singapore in October 2025 and published in the Proceedings of the ACM on Programming Languages.
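To convey the general idea rather than byLLM's actual API (the decorator below and its call_model parameter are hypothetical), one can picture a wrapper that derives a prompt from a function's name, arguments, and docstring instead of requiring a hand-written one:

```python
import inspect

# Hypothetical sketch of the concept, NOT byLLM's actual API: a decorator
# that replaces a function body with an LLM call, deriving the prompt from
# the function's name, arguments, and docstring automatically.
def by_llm(call_model):  # call_model: any callable mapping prompt text -> text
    def decorator(fn):
        sig = inspect.signature(fn)
        def wrapper(*args, **kwargs):
            bound = sig.bind(*args, **kwargs)
            prompt = (
                f"Task '{fn.__name__}': {fn.__doc__}\n"
                f"Inputs: {dict(bound.arguments)}\n"
                "Return only the result."
            )
            return call_model(prompt)
        return wrapper
    return decorator

@by_llm(call_model=lambda prompt: "(model output)")  # plug in a real client
def summarize(text: str) -> str:
    """Summarize the text in one sentence."""
```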