What happens when trailblazing engineers and industry professionals team up? The answer may transform the future of computing efficiency for modern data centers.
Researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a novel artificial intelligence (AI) model inspired by neural oscillations in the brain, with the goal of significantly advancing how machine learning algorithms handle long sequences of data.
A research team from the Skoltech AI Center has proposed a new neural network architecture for generating structured curved coordinate grids, an important tool for calculations in physics, biology, and even finance. The study is published in the journal Scientific Reports.
It's easy to solve a 3x3 Rubik's cube, says Shantanu Chakrabartty, the Clifford W. Murphy Professor and vice dean for research and graduate education in the McKelvey School of Engineering at Washington University in St. Louis. Just learn and memorize the steps, then execute them to arrive at the solution.
In a network, pairs of individual elements, or nodes, connect to each other; those connections can represent a sprawling system with myriad individual links. A hypergraph goes deeper: It gives researchers a way to model complex, dynamical systems where interactions among three or more individuals—or even among groups of individuals—may play an important part.
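To make the distinction concrete, here is a minimal, illustrative sketch (not from the article) of how a hypergraph can be represented in code: each hyperedge is simply a set that may contain two, three, or more nodes, whereas an ordinary graph edge always joins exactly two. The class and method names are hypothetical.

```python
# Minimal, illustrative hypergraph sketch (not from the article).
# A graph edge joins exactly two nodes; a hyperedge may join any number.

from collections import defaultdict

class Hypergraph:
    def __init__(self):
        self.hyperedges = []                  # each hyperedge is a frozenset of nodes
        self.incidence = defaultdict(set)     # node -> indices of hyperedges containing it

    def add_hyperedge(self, *nodes):
        """Add one interaction involving two, three, or more nodes."""
        idx = len(self.hyperedges)
        edge = frozenset(nodes)
        self.hyperedges.append(edge)
        for n in edge:
            self.incidence[n].add(idx)

    def neighbors(self, node):
        """All nodes that share at least one hyperedge with `node`."""
        out = set()
        for idx in self.incidence[node]:
            out |= self.hyperedges[idx]
        out.discard(node)
        return out

# Usage: a three-way group interaction that a plain graph cannot capture as a single link.
hg = Hypergraph()
hg.add_hyperedge("A", "B")            # ordinary pairwise link
hg.add_hyperedge("A", "C", "D")       # group interaction among three nodes
print(hg.neighbors("A"))              # {'B', 'C', 'D'}
```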
Despite significant advances in digital technologies, modern scientific results are still communicated using antiquated methods. Over nearly 400 years, scientific literature has progressed from physically printed articles to PDFs, but these electronic documents are still text-based and therefore not machine-readable. This means your computer cannot interpret the information they contain without human assistance.
Delivery robots made by companies such as Starship Technologies and Kiwibot autonomously make their way along city streets and through neighborhoods.
A group of computer scientists at Microsoft Research, working with a colleague from the University of Chinese Academy of Sciences, has introduced a new AI model that runs on a regular CPU rather than a GPU. The researchers have posted a paper on the arXiv preprint server outlining how the new model was built, its characteristics, and how well it has performed in testing so far.
University of Waterloo researchers have developed new artificial intelligence (AI) technology that can accurately analyze pitcher performance and mechanics using low-resolution video of baseball games.
Quantum computers promise to speed calculations dramatically in some key areas such as computational chemistry and high-speed networking. But they're so different from today's computers that scientists need to figure out the best ways to feed them information to take full advantage. The data must be packed in new ways, customized for quantum treatment.
Essential for many industries ranging from Hollywood computer-generated imagery to product design, 3D modeling tools often use text or image prompts to dictate different aspects of visual appearance, like color and form. While this makes sense as a first point of contact, these systems are still limited in their realism because they neglect something central to the human experience: touch.
ChatGPT and similar tools often amaze us with the accuracy of their answers, but unfortunately, they also repeatedly give us cause for doubt. The main issue with powerful AI response engines is that they deliver perfect answers and obvious nonsense with the same ease. One of the major challenges lies in how the large language models (LLMs) underlying AI deal with uncertainty.
In a new Nature Communications study, researchers have developed an in-memory ferroelectric differentiator capable of performing calculations directly in memory, without requiring a separate processor.
It's obvious when a dog has been poorly trained. It doesn't respond properly to commands. It pushes boundaries and behaves unpredictably. The same is true with a poorly trained artificial intelligence (AI) model. Only with AI, it's not always easy to identify what went wrong with the training.
Cyberattacks can snarl workflows, put vulnerable client information at risk, and cost corporations and governments millions of dollars. A botnet—a network infected by malware—can be particularly catastrophic. A new Georgia Tech tool automates the malware removal process, saving engineers hours of work and companies money.