The use of artificial intelligence (AI) agents, systems that learn to make predictions, generate content or tackle other tasks by analyzing large amounts of data, is becoming increasingly widespread. Some of these systems have become so advanced that they can also be combined in ways that allow them to interact with each other.
Large language models (LLMs) are dealing with an increasing amount of morally sensitive information as people turn to them for medical advice, companionship and therapy. However, they are not exactly known for possessing a moral compass.
It is not easy to bring new technologies from the laboratory to market. Researchers and companies place very different demands on new developments and do not always find common ground. Scientists at Empa and other institutions have analyzed two emerging solar cell technologies to identify the greatest risks. Their conclusion: Research and industry must start collaborating much earlier.
When a human says an event is "probable" or "likely," people generally have a shared, if fuzzy, understanding of what that means. But when an AI chatbot like ChatGPT uses the same word, it's not assessing the odds the way we do, my colleagues and I found.
The Care Bears taught a generation of kids that sharing is caring, but not everyone has carried this principle into adulthood. Researchers at Michigan State University have found a new angle to promote cooperation: artificial intelligence (AI). The results of this study, titled "Promoting cooperation in the public goods game using artificial intelligent agents," are published in npj Complexity.
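For readers unfamiliar with the setup, here is a minimal sketch of a single public goods game round in Python. It illustrates the dilemma the study builds on, not the paper's AI agents, and the endowment and multiplier values are arbitrary.

```python
# Minimal sketch of one public goods game round, assuming a standard setup
# (illustrates the game the study builds on, not the paper's AI agents).

def public_goods_round(endowment, contributions, multiplier=1.6):
    """Each player contributes part of a private endowment; the pooled
    contributions are multiplied and split equally among all players."""
    pool = sum(contributions) * multiplier
    share = pool / len(contributions)
    # Payoff = what each player kept + an equal share of the multiplied pool
    return [endowment - c + share for c in contributions]

# Example: four players with a 10-token endowment; one free rides
payoffs = public_goods_round(10, [10, 10, 10, 0])
print(payoffs)  # [12.0, 12.0, 12.0, 22.0] -- the free rider earns the most
```

The multiplied, equally split pool means everyone is better off if all contribute, yet each individual earns most by free riding; that tension is exactly what cooperation-promoting agents must overcome.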
Reasoning large language models (LLMs) are designed to solve complex problems by breaking them down into a series of smaller steps. These powerful models are particularly good at challenging tasks like advanced programming and multistep planning. But developing reasoning models demands an enormous amount of computation and energy due to inefficiencies in the training process: while a few of the high-power processors continuously grind through complicated queries, the others in the group sit idle.
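The idle-processor problem can be pictured with a toy calculation, assuming synchronous batching in which every worker waits for the slowest query to finish; the worker count and timings below are made up purely for illustration.

```python
# Illustrative sketch (not the researchers' system): under synchronous
# batching, every processor waits for the slowest query in the batch,
# so utilization drops as response lengths diverge.
import random

random.seed(0)
num_workers = 8
# Hypothetical per-query generation times; reasoning traces vary widely
times = [random.uniform(1, 30) for _ in range(num_workers)]

busy = sum(times)                # total useful work actually performed
wall = max(times) * num_workers  # everyone waits for the longest query
print(f"utilization: {busy / wall:.0%}")  # typically well below 100%
```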
When artificial intelligence systems began acing long-standing academic assessments, researchers realized they had a problem: the tests were too easy. Popular evaluations, such as the Massive Multitask Language Understanding (MMLU) exam, once considered formidable, are no longer challenging enough to meaningfully test advanced AI systems.
When making decisions and judgments, humans can fall into common "traps," known as cognitive biases. A cognitive bias is essentially the tendency to process information in a specific way or follow a systematic pattern. One widely documented cognitive bias is the so-called addition bias, the tendency of people to prefer solving problems by adding elements as opposed to removing them, even if subtraction would be simpler and more efficient. One example of this is adding more paragraphs or explanations to improve an essay or report, even if removing unnecessary sections would be more effective.
People who regularly use online services have between 100 and 200 passwords. Very few can remember every single one. Password managers are therefore extremely helpful, allowing users to access all their passwords with just a single master password.
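The usual design behind that single master password can be sketched as follows. This is a generic illustration of key derivation, not any particular product's scheme, and the iteration count is just a plausible choice.

```python
# Minimal sketch of the common design: the master password is never stored;
# it is stretched into a key that encrypts and decrypts the password vault.
import hashlib
import os

def derive_vault_key(master_password: str, salt: bytes) -> bytes:
    # A slow, salted key-derivation function makes guessing attacks costly
    return hashlib.pbkdf2_hmac(
        "sha256", master_password.encode(), salt, iterations=600_000
    )

salt = os.urandom(16)  # stored in the clear alongside the encrypted vault
key = derive_vault_key("correct horse battery staple", salt)
print(key.hex())       # this key encrypts/decrypts the stored passwords
```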
Researchers from the Department of Computer Science at Bar-Ilan University and from NVIDIA's AI research center in Israel have developed a new method that significantly improves how artificial intelligence models understand spatial instructions when generating images—without retraining or modifying the models themselves. Image-generation systems often struggle with simple prompts such as "a cat under the table" or "a chair to the right of the table," frequently placing objects incorrectly or ignoring spatial relationships altogether. The Bar-Ilan research team has introduced a creative solution that allows AI models to follow such instructions more accurately in real time.
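The article doesn't detail the team's method, but one common inference-time idea in this space is to read each object's cross-attention map and check whether the attention centers satisfy the requested spatial relation. The sketch below uses random placeholder maps purely to show that check; a real system would take the maps from the image generator's denoising network.

```python
# Hedged illustration (not the Bar-Ilan method): score a spatial relation
# such as "A to the right of B" from per-token attention maps.
import numpy as np

def center_of_mass(attn):
    """attn: 2D attention map over image patches for one token."""
    ys, xs = np.indices(attn.shape)
    total = attn.sum()
    return (ys * attn).sum() / total, (xs * attn).sum() / total

rng = np.random.default_rng(0)
cat_attn = rng.random((16, 16))    # placeholder maps; a real system reads
table_attn = rng.random((16, 16))  # these from the denoiser's attention

(_, cat_x), (_, table_x) = center_of_mass(cat_attn), center_of_mass(table_attn)
# Image x grows rightward, so "cat right of table" means a larger x center
print("satisfied" if cat_x > table_x else "violated")
```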
It happens every day: a motorist heading across town checks a navigation app to see how long the trip will take, only to find no parking spots available on arrival. By the time they finally park and walk the rest of the way, they're significantly later than they expected to be.
By now, ChatGPT, Claude, and other large language models have accumulated so much human knowledge that they're far from simple answer-generators; they can also express abstract concepts, such as certain tones, personalities, biases, and moods. However, it's not obvious how these models come to represent such abstract concepts from the knowledge they contain.
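The study's probing approach isn't described here, but one widely used way to illustrate how such concepts can be read out is a "concept direction": the difference between mean activations on contrasting examples. The sketch below uses random stand-in activations and a made-up hidden size, not real model internals.

```python
# Hedged sketch of a common probing idea (not necessarily this study's
# method): estimate a concept direction in activation space from the
# difference of mean activations on contrasting text.
import numpy as np

rng = np.random.default_rng(1)
d = 64                                   # hypothetical hidden size
formal = rng.normal(0.5, 1, (100, d))    # stand-ins for activations on
casual = rng.normal(-0.5, 1, (100, d))   # formal vs. casual text

direction = formal.mean(0) - casual.mean(0)
direction /= np.linalg.norm(direction)

# Projecting a new activation onto this direction scores its "formality"
new_act = rng.normal(0.5, 1, d)
print(float(new_act @ direction))
```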
A University of Hawaiʻi at Mānoa student-led team has developed a new algorithm to help scientists determine direction in complex two-dimensional (2D) data, with potential applications ranging from particle physics to machine learning. The research was published in AIP Advances.
Just like each person has unique fingerprints, every CMOS chip has a distinctive "fingerprint" caused by tiny, random manufacturing variations. Engineers can leverage this unforgeable ID for authentication, safeguarding a device against attackers trying to steal private data.
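In a typical scheme of this kind, authentication works by enrolling a chip's slightly noisy fingerprint once and later accepting readings that are close in Hamming distance. The sketch below is a generic illustration with made-up byte values, not a specific product's protocol.

```python
# Generic sketch of fingerprint matching for chip authentication:
# the same chip re-reads its fingerprint with a little noise, so small
# bit differences are accepted while large ones signal a clone.

def hamming(a: bytes, b: bytes) -> int:
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

enrolled = bytes.fromhex("a3f19c")  # response recorded at enrollment
reading = bytes.fromhex("a3f19e")   # later reading, slightly noisy

print("authentic" if hamming(enrolled, reading) <= 2 else "reject")
```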
Generative artificial intelligence systems often agree with the user, complimenting them in their responses. But human interactions aren't typically built on flattery. To help strengthen these conversations, researchers in the USF Bellini College of Artificial Intelligence, Cybersecurity and Computing are challenging the technology to think and debate in ways that resemble human reasoning.