


Save $30 on a new-to-you iPad 7


TL;DR: As of Sept. 30, you can get a refurbished 7th gen iPad (WiFi, 32GB) for just $299.99, down from $329. That's about $30 in savings. But you might want to hurry, because this deal ends today.


Whoever said that buying refurbished tech is just like buying a worn-out hand-me-down gadget is simply misinformed. In reality, it's almost like buying brand-new. You may not be getting the latest model, and the product may have a few cosmetic marks here and there, but it has been refurbished to function like new.

More often than not, refurbished items have been repaired and tested prior to being sold to ensure that they're in perfect working order. If there are any damaged or worn-out parts, they're replaced with original ones, resulting in a like-new gadget. And if you're in the market for an iPad, this refurbished iPad 7 may be worth your investment.

A tablet that's up to the task

A multifunctional device, this refurbished iPad 7 is a great choice for anyone who loves to read, surf the web, and play games — or do all three at once. The 10.2-inch tablet packs Apple's iconic Retina display for crystal-clear graphics and is powered by the quad-core 2.33 GHz Apple A10 Fusion processor, so it's built to handle serious multitasking without lag. It's also rated for up to 10 hours of battery life, allowing you to enjoy long hours of surfing, streaming, and playing.

With an 8MP back camera, a 1.2MP FaceTime HD front camera, and stereo speakers, it makes for a great entertainment companion or a more portable computer you can take anywhere. And with 32GB of built-in storage, you should have no problem keeping essential files on hand for the long haul.

Save on a new-to-you iPad

The iPad is not exactly known for being affordable, but this deal makes it more accessible. Instead of shelling out $329, you can snap up this refurbished iPad 7 for only $299.99 through September 30.

Prices subject to change.


via mashable.com

Facebook and Instagram are officially NFT-positive in the U.S.


Facebook and Instagram have completed their NFT rollout across both apps for all users in the U.S.

On Thursday, Meta announced that all users in the U.S. can connect their digital wallets to Facebook and Instagram to share and cross-post their NFTs. This comes as one of the final steps of the official NFT rollout for the Meta-owned apps. Instagram announced a plan to introduce in-app NFTs in May and the same happened on Facebook in June.

Users in the other 100 countries where digital collectibles are available can access the feature on Instagram, but not on Facebook just yet.

"Today we’re announcing everyone on Facebook and Instagram in the U.S. can now connect their wallets and share their digital collectibles," Meta announced in a post updated on Sept. 29. "This includes the ability for people to cross-post digital collectibles that they own across both Facebook and Instagram."

This is one of the rare times a new Meta feature doesn't seem like a carbon copy of another app. Lately, each update Meta makes reads as a transparent imitation of its competitors' features — remixing on Reels, reposting, QR codes, and more. Other brands have, of course, worked NFTs into their business models — Walmart, Nike, Coca-Cola, and Twitter among them — but this move doesn't appear to be an exact copy of another social media app.

Good for you, Meta. I guess?


via mashable.com

Magic Leap's smaller, lighter second-gen AR glasses are now available

Magic Leap's second take on augmented reality eyewear is available. The company has started selling Magic Leap 2 in 19 countries, including the US, UK and EU nations. The glasses are still aimed at developers and pros, but they include a number of design upgrades that make them considerably more practical — and point to where AR might be headed.

The design is 50 percent smaller and 20 percent lighter than the original, so it should be more comfortable to wear over long periods. Magic Leap also promises better visibility for AR in bright light (think a well-lit office) thanks to "dynamic dimming" that makes virtual content appear more solid. Lens optics supposedly deliver higher-quality imagery with easier-to-read text, and the company touts a wider field of view (70 degrees diagonal) than comparable wearables.

You can expect decent power that includes a quad-core AMD Zen 2-based processor in the "compute pack," a 12.6MP camera (plus a host of cameras for depth, eye tracking and field-of-view) and 60FPS hand tracking for gestures. You'll only get 3.5 hours of non-stop use, but the 256GB of storage (the most in any dedicated AR device, Magic Leap claims) provides room for more sophisticated apps.

As you might guess, this won't be a casual purchase. The Magic Leap 2 Base model costs $3,299, while developers who want extra tools, enterprise features and early access for internal use can pay $4,099 for the Developer Pro edition. Corporate buyers can opt for the $4,999 Enterprise model, which includes regular, managed updates and two years of business features.

You won't buy this for personal use as a result. This is more for healthcare, industry, retail and other spaces where the price could easily be offset by profits. However, it joins projects from Qualcomm, Google and others in showing where AR technology is going. Where early tech tended to be bulky and only ideal for a narrow set of circumstances, hardware like Magic Leap 2 appears to be considerably more usable in the real world.


via engadget.com

Mirror is rebranding as Lululemon Studio — and offering a sweet $700 off deal


SAVE $700: If you've been holding out for a good deal to buy a fitness mirror, you're in luck. Mirror is offering $700 off its flagship product ahead of a big rebrand. Grab one as of Sept. 30 for just $795 with code LLSTUDIO700.


Mirror is officially rebranding as Lululemon Studio come Oct. 5, but the fitness mirror brand is offering a steep $700 off deal ahead of the rollout.

This is one of the biggest discounts we've ever seen on the Mirror. For reference, last year's Black Friday deal was $500 off and offered free delivery and installation (a $250 value). With this deal, you'll still get free delivery, but the professional installation isn't included.

The Mirror rebrand will bring tons of new content to the digital platform. Mirror announced eight new partners: AARMY, Y7 Studio, DOGPOUND, FORWARD_Space, PureBarre, Rumble, AKT and YogaSix. These fitness partners will start releasing new classes on the Lululemon Studio platform beginning Oct. 5, with new classes added every week. This content will be in addition to the current library of Mirror fitness classes already available on the platform.

Some new perks are being rolled out with the rebrand, too. Lululemon Studio members will score 10% off Lululemon apparel and gear, 20% off classes at select in-person partner fitness studios, and free classes at select Lululemon locations. The Lululemon Studio membership will run the same price as the current Mirror membership — $39 per month. Existing Mirror members will become Lululemon Studio members following the launch.

In a press release, Nikki Neuburger, Lululemon's Chief Brand Officer, said that Lululemon customers' "fitness needs have evolved" and that the Lululemon Studio rebrand is a shift to hybrid at-home and in-person workouts as more people venture outside their homes on a day-to-day basis.


via mashable.com

Anker's Soundcore Liberty 4 earbuds can monitor your heart rate

Anker's Soundcore audio brand has revealed yet more products. Among them are the Liberty 4 earbuds, which can track your heart rate. The heart rate sensor is in the right earbud, so you'll need to wear that one to use the feature. When it's measuring your blood oxygen levels, the earbud will emit a red light. Soundcore hasn't disclosed the waterproof rating, which is odd given that heart-rate tracking functions are closely linked to workouts.

Soundcore says an algorithm can tune the spatial audio function depending on whether you're watching a movie or listening to music. The earbuds offer dynamic head tracking too. Soundcore is using a gyroscope to ensure sound always surrounds you. In addition, Liberty 4 offers adaptive noise canceling (which automatically adjusts noise cancellation levels based on environmental audio) and personalized sound.

You'll get up to nine hours of use on a single charge, Soundcore claims, and 28 hours in total before you need to top up the charging case's battery. These figures drop to five and 15 hours with spatial audio on, and seven and 24 hours when ANC is enabled. That said, Soundcore says you'll get up to three hours of use after charging for 15 minutes.

In addition, there's multipoint connectivity, so you can pair Liberty 4 to your computer and phone at the same time over Bluetooth. The $150 earbuds come in white or black colorways. You can buy Liberty 4 direct from Soundcore now and other retailers in October.

Anker Sleep A10 earbuds
Soundcore

Soundcore has also unveiled new sleep earbuds. It says the Sleep A10 buds can block out up to 35dB of noise thanks to a four-point noise masking system.

Unlike Bose Sleepbuds 2, which only allow you to listen to sleep sounds from a certain app, you can play any audio through Sleep A10 via Bluetooth. Soundcore says its earbuds have dynamic drivers designed to deliver low-frequency sound that induces sleep. Crucially, the earbuds are seemingly comfortable for folks who sleep on their side. They have ear wings and twin seal ear tips to help keep them snug in your ears during the night.

Other features include sleep monitoring and a personal alarm clock. Anker claims the buds have a battery life of up to 10 hours, so they should be able to help wake you up in addition to lulling you to sleep. The Sleep A10 buds, which cost $69 less than Bose's Sleepbuds 2, are available from Soundcore's website for $180.


via engadget.com

Boston Dynamics’ Spot could be the newest first responder in emergency situations

A Boston Dynamics Spot robot dog putting a fire out with a fire extinguisher.

Ontario Power Generation has partnered with Ontario Tech University to test the capabilities of a Boston Dynamics Spot robot dog to improve safety in the nuclear power sector. The robot can be sent on autonomous missions, conduct visual inspections, and even act as a first responding firefighter in the event of an emergency.


via mashable.com

Meta reportedly suspends all hiring, warns staff of possible layoffs

As with many other industries, the tech sector has been feeling the squeeze of the global economic slowdown this year. Meta isn't immune from that. Reports in May suggested that the company would slow down the rate of new hires this year. Now, Bloomberg reports that Meta has put all hiring on hold. CEO Mark Zuckerberg is also said to have told staff that there's likely more restructuring and downsizing on the way. 

Meta declined to comment on the report. The company directed Engadget to a comment that Zuckerberg made during Meta's most recent earnings call in July. “Given the continued trends, this is even more of a focus now than it was last quarter," Zuckerberg said. "Our plan is to steadily reduce headcount growth over the next year. Many teams are going to shrink so we can shift energy to other areas, and I wanted to give our leaders the ability to decide within their teams where to double down, where to backfill attrition, and where to restructure teams while minimizing thrash to the long-term initiatives.”

In that earnings report, Meta disclosed that, in the April-June quarter, its revenue dropped by one percent year-over-year. It's the first time the company has ever reported a fall in revenue.

Developing...


via engadget.com

AI is already better at lip reading than we are

They Shall Not Grow Old, a 2018 documentary about the lives and aspirations of British and New Zealand soldiers living through World War I from acclaimed Lord of the Rings director Peter Jackson, had its hundred-plus-year-old silent footage modernized through both colorization and the recording of new audio for previously non-existent dialog. To get an idea of what the folks featured in the archival footage were saying, Jackson hired a team of forensic lip readers to guesstimate their recorded utterances. Reportedly, “the lip readers were so precise they were even able to determine the dialect and accent of the people speaking.”

“These blokes did not live in a black and white, silent world, and this film is not about the war; it’s about the soldier’s experience fighting the war,” Jackson told the Daily Sentinel in 2018. “I wanted the audience to see, as close as possible, what the soldiers saw, and how they saw it, and heard it.”

That is quite the linguistic feat given that a 2009 study found that most people can only read lips with around 20 percent accuracy and the CDC’s Hearing Loss in Children Parent’s Guide estimates that, “a good speech reader might be able to see only 4 to 5 words in a 12-word sentence.” Similarly, a 2011 study out of the University of Oklahoma saw only around 10 percent accuracy in its test subjects.

“Any individual who achieved a CUNY lip-reading score of 30 percent correct is considered an outlier, giving them a T-score of nearly 80, three times the standard deviation from the mean. A lip-reading recognition accuracy score of 45 percent correct places an individual 5 standard deviations above the mean,” the 2011 study concluded. “These results quantify the inherent difficulty in visual-only sentence recognition.”
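To unpack those figures: a T-score is a standardized score with a mean of 50 and a standard deviation of 10, so a T-score of 80 sits three standard deviations above the population mean. Here is a minimal Python sketch of the conversion; the raw population mean and standard deviation in the example are hypothetical placeholders, not figures from the CUNY study.

    # Minimal sketch: converting raw lip-reading accuracy into a T-score,
    # a standardized scale with a mean of 50 and a standard deviation of 10.
    # The population mean/SD below are hypothetical, not the CUNY study's values.
    def t_score(raw, population_mean, population_sd):
        z = (raw - population_mean) / population_sd  # standard deviations above the mean
        return 50 + 10 * z

    # With an assumed population mean of 12% and SD of 6%, a 30%-correct reader
    # lands three SDs above average, i.e. a T-score of 80.
    print(t_score(30, 12, 6))  # -> 80.0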

For humans, lip reading is a lot like batting in the Major Leagues — consistently get it right even just three times out of ten and you’ll be among the best to ever play the game. For modern machine learning systems, lip reading is more like playing Go — just round after round of beating up on the meatsacks that created and enslaved you — with today’s state-of-the-art systems achieving well over 95 percent sentence-level word accuracy. And as they continue to improve, we could soon see a day where tasks from silent-movie processing and silent dictation in public to biometric identification are handled by AI systems.
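For context, the "sentence-level word accuracy" used in those benchmarks is typically reported as one minus the word error rate (WER), which counts the word insertions, deletions and substitutions needed to turn a system's transcript into the reference sentence. A rough Python sketch of that scoring follows; the sample sentences are invented, not drawn from any lip-reading dataset.

    # Rough sketch of word error rate (WER); word accuracy is 1 - WER.
    # Uses standard edit distance over words (insertions, deletions, substitutions).
    def wer(reference: str, hypothesis: str) -> float:
        ref, hyp = reference.split(), hypothesis.split()
        # dp[i][j] = edits needed to turn the first i reference words
        # into the first j hypothesis words
        dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
        for i in range(len(ref) + 1):
            dp[i][0] = i
        for j in range(len(hyp) + 1):
            dp[0][j] = j
        for i in range(1, len(ref) + 1):
            for j in range(1, len(hyp) + 1):
                cost = 0 if ref[i - 1] == hyp[j - 1] else 1
                dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                               dp[i][j - 1] + 1,         # insertion
                               dp[i - 1][j - 1] + cost)  # substitution
        return dp[-1][-1] / len(ref)

    # Hypothetical example: one substitution in a five-word sentence -> 80% accuracy.
    print(1 - wer("set blue at five now", "set blue at nine now"))  # -> 0.8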

Context matters


One would think that humans would be better at lip reading by now, given that we’ve been officially practicing the technique since the days of the Spanish Benedictine monk Pedro Ponce de León, who is credited with pioneering the idea in the early 16th century.

“We usually think of speech as what we hear, but the audible part of speech is only part of it,” Dr. Fabian Campbell-West, CTO of lip reading app developer, Liopa, told Engadget via email. “As we perceive it, a person's speech can be divided into visual and auditory units. The visual units, called visemes, are seen as lip movements. The audible units, called phonemes, are heard as sound waves.”

“When we're communicating with each other face-to-face is often preferred because we are sensitive to both visual and auditory information,” he continued. “However, there are approximately three times as many phonemes as visemes. In other words, lip movements alone do not contain as much information as the audible part of speech.”

“Most lipreading actuations, besides the lips and sometimes tongue and teeth, are latent and difficult to disambiguate without context,” then-Oxford University researcher and LipNet developer, Yannis Assael, noted in 2016, citing Fisher’s earlier studies. These homophemes are the secret to Bad Lip Reading’s success.
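A toy illustration of why homophemes arise: several phonemes that look identical on the lips collapse into a single viseme, so visually distinct words become indistinguishable without context. The phoneme-to-viseme table below is deliberately simplified and hypothetical, not one used by any real lip-reading system.

    # Toy illustration of homophemes: /p/, /b/ and /m/ are all bilabial sounds,
    # so "pat", "bat" and "mat" produce the same sequence of lip shapes.
    # This viseme table is a simplified, hypothetical mapping.
    VISEME = {
        "p": "bilabial", "b": "bilabial", "m": "bilabial",
        "a": "open-vowel",
        "t": "alveolar", "d": "alveolar",
    }

    def viseme_sequence(word):
        return [VISEME[letter] for letter in word]

    for word in ("pat", "bat", "mat"):
        print(word, viseme_sequence(word))
    # All three print ['bilabial', 'open-vowel', 'alveolar'], so the words are
    # indistinguishable from lip movements alone.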

What’s wild is that Bad Lip Reading will generally work in any spoken language, whether it’s pitch-accent like English or tonal like Vietnamese. “Language does make a difference, especially those with unique sounds that aren't common in other languages,” Campbell-West said. “Each language has syntax and pronunciation rules that will affect how it is interpreted. Broadly speaking, the methods for understanding are the same.”

“Tonal languages are interesting because they use the same word with different tone (like musical pitch) changes to convey meaning,” he continued. “Intuitively this would present a challenge for lip reading, however research shows that it's still possible to interpret speech this way. Part of the reason is that changing tone requires physiological changes that can manifest visually. Lip reading is also done over time, so the context of previous visemes, words and phrases can help with understanding.”

“It matters in terms of how good your knowledge of the language is because you're basically limiting the set of ambiguities that you can search for,” Adrian KC Lee, ScD, Professor and Chair of the Department of Speech and Hearing Sciences at the University of Washington, told Engadget. “Say, ‘cold’ and ‘hold,’ right? If you just sit in front of a mirror, you can't really tell the difference. So from a physical point of view, it's impossible, but if I'm holding something versus talking about the weather, you, by the context, already know.”

In addition to the general context of the larger conversation, much of what people convey when they speak comes across non-verbally. “Communication is usually easier when you can see the person as well as hear them,” Campbell-West said, “but the recent proliferation of video calls has shown us all that it's not just about seeing the person; there's a lot more nuance. There is a lot more potential for building intelligent automated systems for understanding human communication than what is currently possible.”

Missing a forest for the trees, linguistically

While human and machine lip readers have the same general end goal, the aims of their individual processes differ greatly. As a team of researchers from Iran University of Science and Technology argued in 2021, “Over the past years, several methods have been proposed for a person to lip-read, but there is an important difference between these methods and the lip-reading methods suggested in AI. The purpose of the proposed methods for lip-reading by the machine is to convert visual information into words… However, the main purpose of lip-reading by humans is to understand the meaning of speech and not to understand every single word of speech.”

In short, “humans are generally lazy and rely on context because we have a lot of prior knowledge,” Lee explained. And it’s that dissonance in process — the linguistic equivalent of missing a forest for the trees — that presents such a unique challenge to the goal of automating lip reading.

“A major obstacle in the study of lipreading is the lack of a standard and practical database,” said Mingfeng Hao of Xinjiang University. “The size and quality of the database determine the training effect of this model, and a perfect database will also promote the discovery and solution of more and more complex and difficult problems in lipreading tasks.” Other obstacles can include environmental factors like poor lighting and shifting backgrounds, which can confound machine vision systems, as can variances due to the speaker’s skin tone, the rotational angle of their head (which shifts the viewed angle of the mouth) and the obscuring presence of wrinkles and beards.

As Assael notes, “Machine lipreading is difficult because it requires extracting spatiotemporal features from the video (since both position and motion are important).” However, as Hao explains in 2020’s A Survey on Lip Reading Technology, “action recognition, which belongs to video classification, can be classified through a single image,” while “lipreading often needs to extract the features related to the speech content from a single image and analyze the time relationship between the whole sequence of images to infer the content.” It’s an obstacle that requires both natural language processing and machine vision capabilities to overcome.
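To make that concrete, below is a minimal, hypothetical sketch (in PyTorch) of the kind of spatiotemporal pipeline the quotes describe: a 3D convolution pulls per-frame mouth features out of the video, and a recurrent layer models how those features evolve over time before a classifier emits per-frame token scores. It illustrates the general approach only and is not LipNet or any published architecture.

    # A minimal, hypothetical sketch (PyTorch) of a spatiotemporal lip-reading
    # pipeline: a 3D convolution extracts per-frame mouth features, a recurrent
    # layer models how they change over time, and a linear head scores tokens.
    import torch
    import torch.nn as nn

    class TinyLipReader(nn.Module):
        def __init__(self, num_tokens=40):
            super().__init__()
            # Input is (batch, channels, time, height, width) grayscale video.
            self.frontend = nn.Sequential(
                nn.Conv3d(1, 16, kernel_size=(3, 5, 5), padding=(1, 2, 2)),
                nn.ReLU(),
                nn.AdaptiveAvgPool3d((None, 1, 1)),  # keep the time axis, pool away space
            )
            self.temporal = nn.GRU(16, 64, batch_first=True)  # models motion over time
            self.classifier = nn.Linear(64, num_tokens)       # per-frame token logits

        def forward(self, video):                                  # video: (B, 1, T, H, W)
            feats = self.frontend(video)                           # (B, 16, T, 1, 1)
            feats = feats.squeeze(-1).squeeze(-1).transpose(1, 2)  # (B, T, 16)
            out, _ = self.temporal(feats)                          # (B, T, 64)
            return self.classifier(out)                            # (B, T, num_tokens)

    # 2 clips, 75 frames of 64x64 mouth crops -> per-frame scores over 40 tokens.
    print(TinyLipReader()(torch.randn(2, 1, 75, 64, 64)).shape)  # torch.Size([2, 75, 40])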

Acronym soup

Today, speech recognition comes in three flavors, depending on the input source. What we’re talking about today falls under Visual Speech Recognition (VSR) research — that is, using only visual means to understand what is being conveyed. Then there’s Automatic Speech Recognition (ASR), which relies entirely on audio (i.e., “Hey Siri”), and Audio-Visual Automatic Speech Recognition (AV-ASR), which incorporates both audio and visual cues into its guesses.

“Research into automatic speech recognition (ASR) is extremely mature and the current state-of-the-art is unrecognizable compared to what was possible when the research started,” Campbell-West said. “Visual speech recognition (VSR) is still at the relatively early stages of exploitation and systems will continue to mature.” Liopa’s SRAVI app, which enables hospital patients to communicate regardless of whether they can actively verbalize, relies on the latter methodology. “This can use both modes of information to help overcome the deficiencies of the other,” he said. “In future there will absolutely be systems that use additional cues to support understanding.”

“There are several differences between VSR implementations,” Campbell-West continued. “From a technical perspective the architecture of how the models are built is different … Deep-learning problems can be approached from two different angles. The first is looking for the best possible architecture, the second is using a large amount of data to cover as much variation as possible. Both approaches are important and can be combined.”

In the early days of VSR research, datasets like AVLetters had to be hand-labeled and -categorized, a labor-intensive limitation that severely restricted the amount of data available for training machine learning models. As such, initial research focused first on the absolute basics — alphabet and number-level identification — before eventually advancing to word- and phrase-level identification, with sentence-level being today’s state-of-the-art which seeks to understand human speech in more natural settings and situations.

In recent years, the rise of more advanced deep learning techniques, which train models on essentially the internet at large, along with the massive expansion of social and visual media posted online, has enabled researchers to generate far larger datasets, like the Oxford-BBC Lip Reading Sentences 2 (LRS2), which is based on thousands of spoken lines from various BBC programs. LRS3-TED gleaned 150,000 sentences from various TED programs, while the LSVSR (Large-Scale Visual Speech Recognition) database, among the largest currently in existence, offers 140,000 hours of audio segments with 2,934,899 speech statements and over 127,000 words.

And it’s not just English: Similar datasets exist for a number of languages such as HIT-AVDB-II, which is based on a set of Chinese poems, or IV2, a French database composed of 300 people saying the same 15 phrases. Similar sets exist too for Russian, Spanish and Czech-language applications.

Looking ahead

VSR’s future could wind up looking a lot like ASR’s past, says Campbell-West: “There are many barriers for adoption of VSR, as there were for ASR during its development over the last few decades.” Privacy is a big one, of course. Though the younger generations are less inhibited about documenting their lives online, Campbell-West said, “people are rightly more aware of privacy now than they were before. People may tolerate a microphone while not tolerating a camera.”

Regardless, Campbell-West remains excited about VSR’s potential future applications, such as high-fidelity automated captioning. “I envisage a real-time subtitling system so you can get live subtitles in your glasses when speaking to someone,” Campbell-West said. “For anyone hard-of-hearing this could be a life-changing application, but even for general use in noisy environments this could be useful.”

“There are circumstances where noise makes ASR very difficult but voice control is advantageous, such as in a car,” he continued. “VSR could help these systems become better and safer for the driver and passengers.”

On the other hand, Lee, whose lab at UW has researched Brain-Computer Interface technologies extensively, sees wearable text displays more as a “stopgap” measure until BCI tech further matures. “We don't necessarily want to sell BCI to that point where, ‘Okay, we're gonna do brain-to-brain communication without even talking out loud,’” Lee said. “In a decade or so, you’ll find biological signals being leveraged in hearing aids, for sure. As little as [the device] seeing where your eyes glance may be able to give it a clue on where to focus listening.”

“I hesitate to really say ‘oh yeah, we're gonna get brain-controlled hearing aids,’” Lee conceded. “I think it is doable, but you know, it will take time.”


via engadget.com

Native Instruments pads out its Komplete 14 suite with some welcome new toys

Komplete, Native Instruments' flagship music production bundle, has a little bit of everything. That's always been part of its appeal. It's pricey, but you get monster synths, a top-notch drum sampler, a virtual guitar rig and Kontakt — which is also a sampler, but calling it one seems incredibly reductive. Native Instruments is still one of the biggest names in the music software world, but it's an increasingly crowded and competitive market. And much of it is moving towards a subscription model (even Native Instruments). So this year the company is adding some new software in hopes that customers will come back for at least one more big-ticket purchase.

Komplete 14 is the first version to be released since Native Instruments (NI) joined Soundwide, a collection of brands including iZotope and Plugin Alliance, among others. As such, one of the biggest additions to the Komplete library (at least in the $599 Standard version and higher) is iZotope's Ozone 10 Standard. This mastering plug-in has legions of fans thanks to its powerful feature set and simple interface. But for many, the biggest selling point is its AI-powered mastering assistance. Many amateur musicians (myself included) rely on Ozone to master their tracks. You simply play the loudest bit of your song, click a button, and the plugin will suggest a starting point for mastering, including compression and EQ. You can then accept the settings, tweak them to your liking or toss them and start from scratch.


The new partnership also allowed Native Instruments to beef up the bundle with a handful of smaller items from Plugin Alliance and Brainworx like bx_Oberhausen, bx_Crispytuner and LO-FI-AF. None of these instruments or effects individually are likely to convince you one way or another that Komplete's more expensive versions are worth the outlay. But I don't know anyone who is going to complain about having too many plugins. 

The only issue is that it might not be immediately clear to many users how to get access to those. They're not in the Native Access manager. Instead, you'll have to go to your products and serials list on the NI site to get the "Plugin Alliance Bundle for Komplete 14" code. Then you'll head on over to Plugin Alliance, redeem the code and download a separate plugin manager. Hopefully at some point the two platforms will be integrated to remove the additional steps.


The big centerpiece of Komplete, as always, is Kontakt. The new version — seven — isn't a giant departure for this industry stalwart. The browser has been updated for better compatibility with HiDPI displays and improved search and filtering tools. The factory library has also been reworked to take advantage of the new graphics and to sound better. The process of building your own Kontakt instruments has also been simplified with improved creator tools.

Kontakt 7 may not be a significant change from version six, but if you spring for the more expensive versions, like the $1,199 Komplete Ultimate or $1,799 Komplete Collector's Edition, you do get some unique and powerful expansions like Lores, Ashlight, Kinetic Toys and, one of my personal favorites, Piano Colors. The latter combines samples of a grand piano, various synths and textures, along with effects and modulation tools to create complex sounds that walk the line between organic and synthetic.


The one piece of bad news here is that Kontakt 7, while it is technically included in Komplete 14, isn't available yet and won't ship until some point in October. Komplete 14 is available now, starting at $199 for the basic Komplete Select package and going all the way up to $1,799 for Komplete Collector's Edition.


via engadget.com

James Webb and Hubble telescope images capture DART asteroid collision

NASA made history this week after an attempt to slam its DART (Double Asteroid Redirection Test) spacecraft into an asteroid nearly 7 million miles away proved successful. While NASA shared some close-up images of the impact, it observed the planetary defense test from afar as well, thanks to the help of the James Webb and Hubble space telescopes. On the surface, the images aren't exactly the most striking things we've seen from either telescope, but they could help reveal a lot of valuable information.

This was the first time that Hubble and JWST have observed the same celestial target simultaneously. While that was a milestone for the telescopes in itself, NASA suggests the data they captured will help researchers learn more about the history and makeup of the solar system. They'll be able to use the information to learn about the surface of Dimorphos (the asteroid in question), how much material was ejected after DART crashed into it and how fast that material was traveling.

JWST and Hubble picked up different wavelengths of light (infrared and visible, respectively). NASA says that being able to observe data from multiple wavelengths will help scientists figure out if big chunks of material left Dimorphos' surface or if it was mostly fine dust. This is an important aspect of the test, as the data can help researchers figure out if crashing spacecraft into an asteroid can change its orbit. The ultimate aim is to develop a system that can divert incoming asteroids away from Earth.

NASA says that JWST picked up images of "a tight, compact core, with plumes of material appearing as wisps streaming away from the center of where the impact took place." JWST, which captured 10 images over five hours, will continue to collect spectroscopic data from the asteroid system in the coming months to help researchers better understand the chemical composition of Dimorphos. NASA shared a timelapse GIF of the images that JWST captured. 

This animation, a timelapse of images from NASA’s James Webb Space Telescope, covers the time spanning just before impact at 7:14 p.m. EDT, Sept. 26, through 5 hours post-impact. Plumes of material from a compact core appear as wisps streaming away from where the impact took place. An area of rapid, extreme brightening is also visible in the animation.
NASA/ESA/CSA/Cristina Thomas (Northern Arizona University)/Ian Wong (NASA-GSFC)/Joseph DePasquale (STScI)

At around 14,000 MPH, Dimorphos was traveling at a speed over three times faster than JWST was originally designed to track. However, the telescope's flight operations, planning and science teams were able to develop a way to capture the impact.

As for Hubble, the 32-year-old telescope's Wide Field Camera 3 captured its own images of the collision. "Ejecta from the impact appear as rays stretching out from the body of the asteroid," according to NASA. The agency noted that some of the rays appear curved, and astronomers will have to examine the data to gain a better understanding of what that may mean.

These images from NASA’s Hubble Space Telescope, taken (left to right) 22 minutes, 5 hours, and 8.2 hours after NASA’s Double Asteroid Redirection Test (DART) intentionally impacted Dimorphos, show expanding plumes of ejecta from the asteroid’s body. The Hubble images show ejecta from the impact that appear as rays stretching out from the body of the asteroid. The bolder, fanned-out spike of ejecta to the left of the asteroid is in the general direction from which DART approached.
NASA/ESA/Jian-Yang Li (PSI)/Alyssa Pagan (STScI)

According to their initial findings, though, the brightness of the asteroid system increased threefold after impact. That level of brightness stayed the same for at least eight hours. Hubble captured 45 images immediately before and after DART's impact. It will observe the asteroid system 10 additional times over the next few weeks.

It took 10 months for DART, which is about the size of a vending machine, to reach Dimorphos. The football stadium-sized asteroid was around 6.8 million miles away from Earth when DART rammed into it. Pulling off an experiment like that is no mean feat. The lessons scientists take away from the test may prove invaluable.


via engadget.com

Google Maps will help you discover a neighborhood's 'vibe'

Google may soon give you a feel for a city district before you've ever set foot in it. The company is introducing a "neighborhood vibe" feature for Maps on Android and iOS that will help you learn what's new and worth seeing in a particular area through info and imagery. You may discover a historic quarter full of landmarks and museums, or the hottest restaurants in the chic part of town.

The technology relies on a blend of AI with community contributions to Google Maps' landscape, such as photos and reviews. If all goes well, the feature will evolve in sync with the neighborhood itself.

The vibe check will roll out to Maps users worldwide in the "coming months." No, this won't make you as knowledgeable as a resident. However, it might help you plan a vacation or move — instead of searching blindly for things to do, you'll have a decent sense of what's popular with locals.


via engadget.com

Google Lens image and text multisearch will soon be available in more languages

Multisearch, a Google Lens feature that can search images and text simultaneously, will soon be more broadly available after arriving in the US as a beta earlier this year. Google says multisearch will expand to more than 70 languages in the coming months. The company made the announcement at an event focused on Search.

In addition, the Near Me feature, which Google unveiled at I/O back in May, will land in the US in English sometime this fall. This ties into multisearch, with the idea of making it easier for folks to find out more details about local businesses. 

Multisearch is largely about enabling people to point their camera at something and ask about it while they're using the Google app. You could aim your phone at a store and request details about it, for instance, or ask about a screenshot of any unfamiliar item, like an item of clothing. You could also look up what a certain food item is called, like soup dumplings or laksa, and see what restaurants around you offer it.

Also on the Lens front, there will be some changes when it comes to augmented reality translations. Google is now employing the same artificial intelligence tech it uses for the Pixel 6's Magic Eraser feature to make it appear like it's replacing the original text, instead of superimposing the translation on top. The idea is to make translations look more seamless and natural.

Google is also adding shortcuts to the bottom of the search bar in its iOS app, so you'll more easily find features like translating text with your camera, hum to search and translating text in screenshots.


via engadget.com

Here are the new features Amazon is adding to Alexa

While new gadgets tend to dominate Amazon's annual Devices and Services Event, the company still has a few upgrades planned for its ubiquitous digital assistant. So here are all the fresh features and skills Amazon is planning to add to Alexa. 

For people trying to shop for a new outfit, the Echo Show is getting an AI-based skill that allows it to more easily search for clothes using a customer's references or specific characteristics. For example, Amazon says you can ask things like "Alexa, show me the one-shoulder top." Amazon explained the skill was created using the Alexa Teacher Model, which was trained using images and captions sourced from the company's product database. 

In the car, Alexa is also getting a new Roadside Assistance feature that will connect you with an agent in case you need to do something like call a tow truck or get help changing a flat tire. On top of that, BMW is expanding its partnership with Amazon and plans to build its next-generation voice assistant using the Alexa Custom Assistant solution. BMW's goal is to support more natural language controls that are easy to use while driving.

Alexa is also getting integration with the new Halo Rise, allowing it to do things like automatically turn off your lights when you get in bed or play your favorite song to help you wake up in the morning. Amazon will also be adding the Fire TV experience to the Echo Show 15, so users will be able to watch all their favorite shows or purchased content on a smaller screen. There's also a new Alexa Voice Remote Pro for Fire TVs that allows you to more easily switch between various inputs, control routines and, thanks to the controller's built-in speaker, use your voice to find the remote if you lose it.

Meanwhile for Disney fans, Amazon is adding a new "Hey Disney" command that gives anyone with a Kids+ subscription access to immersive entertainment experiences featuring big-name Disney characters. 

Follow all of the news from Amazon's event right here!


via engadget.com

Amazon's new Fire TV Cube can control your cable box

Amazon's Fire TV Cube has always been a bit of a curiosity. Clearly, the company wanted to combine an Echo Dot with a Fire TV streaming player, but it took a few tries before we genuinely liked it. Now with the third-generation Fire TV Cube, Amazon is giving it a more premium sheen with a cloth-covered design, a more powerful 2GHz octa-core processor, and an HDMI input connection for plugging in your cable box. Doing so will let you tune the Fire TV Cube to specific channels with voice commands—you know, for those of you who can't let your local sports go.

Given that new hardware, Amazon says the Fire TV Cube will feel much faster than before. It's also the first streamer on the market to include support for WiFi 6E, which should help when you're dealing with huge 4K streams. When it comes to older content, Amazon has also included Super Resolution support for upscaling HD video into 4K. It's unclear if that will actually help older content look better, but we're looking forward to testing it out.

In addition to the $140 Fire TV Cube, Amazon also announced the $35 Alexa Voice Remote Pro, which is unfortunately sold separately. It features a backlight and programmable buttons for launching your favorite streaming apps. Perhaps most useful though? There's a Remote Finder feature, which allows you to ask Alexa to trigger a noise in case the Remote Pro gets stuck in your couch. That's one big advantage it has over Apple's easy-to-lose Apple TV remote.


Follow all of the news from Amazon's event right here!


via engadget.com

Amazon is turning the Echo Show 15 into a Fire TV

It's Amazon's turn to host a major fall hardware event, and the company took the opportunity to announce some news for the Echo Show 15. It will bring the Fire TV experience to the smart display for both new and existing owners of the device as a free update.

The move makes a lot of sense when you consider that over 70 percent of Echo Show 15 users watched videos on the device last month, according to Amazon. The company says users will be able to start playing shows, movies and live TV with Alexa voice commands, as well as through touch control. You'll have the option of pairing the third-gen Fire TV Alexa Voice Remote to Echo Show 15 too. A new Fire TV widget will include shortcuts to recently used streaming apps, content you watched lately and your watchlist.

Follow all of the news from Amazon's event right here!


via engadget.com

Apple's vibrant M1 iMac returns to its best price of $1,349.99


Save $100: As of Sept. 28, the 24-inch Apple iMac (M1 chip, 8-core GPU, 256GB) is back at its all-time low of $1,349.99. The 7% discount will automatically apply during checkout.


Looking for the perfect family desktop computer that offers a bit more personality? Bearing the Mashable Choice mark of approval, the 24-inch Apple iMac turned heads with its 2021 release. And it deserves attention again with the return of its best discount.

The M1-powered Apple iMac is back on sale for $1,349.99 at Amazon, which matches its previous all-time low price. The $100 discount is automatically applied once you add it to your cart and complete checkout. And as of now, the discount applies to four of the color options including purple, yellow, orange, and pink.

One of the main benefits of the iMac over any MacBook is that you get the full power of Apple's M1 chip on a larger 24-inch screen with a stunning 4.5K Retina display. It offers over a billion colors across 11.3 million pixels and up to 500 nits of brightness for clearer images. That makes it a great choice if you spend a lot of time editing photos or want a better viewing experience for streaming movies.

And the iMac can keep up with more than just watching movies. The M1 chip delivers high performance with its 8-core CPU and 8-core GPU, backed by 8GB of RAM. You also get 256GB of internal SSD storage plus a 1080p FaceTime HD camera and three studio-quality mics for better video calls. And even though it's a desktop, the iMac is still thin and compact at just 11.5mm thick and under 10 pounds.


via mashable.com

The M2 MacBook Air is worth the upgrade, and it's on sale for $150 off


Save $150: As of Sept. 28, the M2 MacBook Air with 512GB of built-in storage is 10% off at Amazon and Best Buy, bringing it to a new low price of $1,349, down from $1,499.


Even those of us working on the crustiest and dustiest laptops out there can find it hard to justify an upgrade to the latest tech if it fails to offer much improvement while very much carrying that brand-new price tag.

That's not the case with Apple's latest MacBook Air powered by the M2 chip. Yes, in many cases, we'd recommend opting for an older model (the M1 is still great) if you're trying to save some cash. But as of Sept. 28, you can get the M2 MacBook Air with 512GB of built-in storage for just $1,349 at Amazon and Best Buy — that's $150 off a laptop released only two months ago.

There is a decent step up from M1 to M2 — you jump from an 8-core GPU to a 10-core GPU, allowing you to process more complicated tasks at a time without experiencing lag. According to Apple, this can translate to up to 40% faster performance in programs like Final Cut Pro, or 20% faster performance in Photoshop.

The upgrades don't start and stop with the more powerful processor, either. The Liquid Retina display has been bumped up to 13.6 inches, while the overall laptop became slimmer, and ditched the Air's tapered design to more resemble a MacBook Pro. Though the changes might sound small, Mashable tech reporter Alex Perry made note of them in his review, writing "I’ve spent the last three and a half years working on a 2016 MacBook Pro with a 13.3-inch display and immediately felt the difference here."

The FaceTime camera also got a much-needed upgrade from 720p to 1080p, though you will have to deal with a notch. Other drawbacks include a lackluster refresh rate and a general lack of ports. However, you still get that same 18-hour battery life.

The 512GB M2 MacBook Air is available for $1,349 in starlight, silver, and space gray.


via mashable.com

Today's top deals include a best-selling Shark robot vacuum, an M2 MacBook Air, an Apple TV+ subscription, and more


Here are the best deals of the day for Sept. 28:


There's no need to wait until Amazon's Prime Early Access Sale on Oct. 11 to save — the retail behemoth has already dropped deals upon deals for you to take advantage of. From Shark's best-selling vacuums to smart home devices, from SwitchBot to Amazon-branded bundles, there are plenty of sales to shop as early as Sept. 28. Plus, streaming and subscription deals are still going strong with extended free trials and discounted monthly fees.

Whether you're looking to get started on holiday shopping already (good for you) or seeking some retail therapy, today's top deals are a great way to kick things off. Save big on home, tech, and streaming on Sept. 28.

Best home deal

Why we like it

Shark's best-selling robot vacuum, the Shark RV1001AE IQ Self-Emptying Robot Vacuum, typically goes for $599.99, but this $300 discount drops it to half price — matching its all-time low. The robot vacuum maps out your home and can be sent to specific rooms at your beck and call via the SharkClean app. Once it completes its daily duties, it empties itself and contains the debris for up to 45 days at a time, making your life even easier. That's a whole lot more than most robot vacuums at this price point can do.

Best tech deal

Why we like it

If you've been rocking an older MacBook for a while, you should definitely consider upgrading to the recently released M2 MacBook Air. It's not perfect and could use an upgrade to the refresh rate and port selection, but overall it's an ideal work-from-home companion. It features rock-solid battery life, a brilliant 13.6-inch display, enough horsepower to handle everyday tasks and then some, and a high-end keyboard you'll enjoy using. Mashable tech reporter Alex Perry calls it "an excellent machine that is a light at the end of a long, winding tunnel for someone who’s been using one of those awful old keyboards for years." At $150 off, this is the biggest discount to date on the 2022 MacBook Air.

Best streaming deal

Why we like it

Through Oct. 31, new and returning subscribers can get three months of Apple TV+ for free. Since a subscription usually goes for $4.99/month, you'll end up saving about $15 with this exclusive deal. It's a rare chance to save on an Apple TV+ subscription without having to purchase a brand-new Apple device. While the streamer might not have as massive of a library as Netflix or Hulu, the content it does offer is definitely top-notch. You can binge-watch Ted Lasso, Bad Sisters, Loot, Prehistoric Planet, The Morning Show, and many others for three full months without paying a cent. Just be sure to cancel before your trial is up if you want to avoid the monthly fee.

More home deals

More tech deals

More streaming and subscription deals


via mashable.com

Hertz and BP plan to build a nation-wide EV charging network in the US

After recently signing deals to purchase electric vehicles from GM and Polestar, Hertz is turning its attention to the infrastructure needed to support those cars. On Tuesday, the company announced the signing of a memorandum of understanding with energy giant BP (formerly British Petroleum) to build a national charging network across the United States. At this stage, there aren’t a lot of details on the buildout Hertz and BP are considering, but the agreement calls for the oil company’s Pulse subsidiary to manage the potential network.

Hertz currently has EVs available at 500 locations across 38 states. The company says the partnership will allow it to significantly expand its national charging footprint. That’s something Hertz will need to do if it plans to meet its goal of converting at least a quarter of its fleet to electric vehicles by the end of 2024. Even if you don’t end up renting an EV from Hertz anytime soon, you could benefit from the partnership. In addition to serving its customers, the network will be open to the general public – provided, of course, Hertz and BP move forward with their plan.


via engadget.com

YouTuber says Samsung may have a problem with swelling phone batteries

Samsung may not have left its battery troubles completely in the past. YouTuber Mrwhosetheboss (aka Arun Rupesh Maini) and others have noticed that batteries in Samsung phones are swelling up at a disproportionately high rate. While this most often affects older devices where ballooning batteries are more likely, some of them are only a couple of years old — the 2020-era Galaxy Z Fold 2, for instance. It's usually obvious (the phone back pops loose), but it can be subtle enough that you may not realize your battery is in a dangerous state.

Battery swelling isn't a new problem, or unique to Samsung. As lithium batteries age, their increasingly flawed chemical reactions can produce gas that inflates battery cells and increases the risk of a fire. This author has had two non-Samsung phones meet their ends this way. It's more likely to happen if you leave a battery without charging or discharging for a long time, and many companies (such as Apple) recommend that you keep batteries at a roughly 50 percent charge if you won't use a device for extended periods.

The concern is that swelling appears to affect Samsung phones of the past few years more than other brands, and that the power packs are rated to last five years without hazards like this. Tech video creators are uniquely well-suited to track issues like this — Maini and people like him often store dozens or hundreds of phones in identical conditions, although they don't necessarily keep the handsets at appropriate charge levels.

It's not clear just how broad the problem is, or how systemic it might be. We've asked Samsung for comment and will let you know if we hear back. However, it's safe to say the company would rather not deal with more battery woes. The Galaxy Note 7's fire-prone battery led Samsung to conduct a massive recall that (temporarily) tarnished the firm's reputation. With that said, the crisis also prompted a focus on battery safety and served as a warning sign to the phone industry. If nothing else, the swelling reports could educate users and manufacturers.


via engadget.com

The latest iPadOS 16 beta brings Stage Manager to older iPad Pro models

Probably the biggest change Apple announced with iPadOS 16 earlier this year is Stage Manager, a totally new multitasking system that adds overlapping, resizable windows to the iPad. That feature also works on an external display, the first time that iPads could do anything besides mirror their screen on a monitor. Unfortunately, the feature was limited to iPads with the M1 chip — that includes the 11- and 12.9-inch iPad Pro released in May of 2021 as well as the M1-powered iPad Air which Apple released earlier this year. All other older iPads were left out.

That changes with the latest iPadOS 16 developer beta, which was just released. Now, Apple is making Stage Manager work with a number of older devices: it'll work on the 11-inch iPad Pro (first generation and later) and the 12.9-inch iPad Pro (third generation and later). Specifically, it'll be available on the 2018 and 2020 models that use the A12X and A12Z chips rather than just the M1. However, there is one notable missing feature for the older iPad Pro models — Stage Manager will only work on the iPad's built-in display. You won't be able to extend your display to an external monitor.

Apple also says that developer beta 5 of iPadOS 16.1 is removing external display support for Stage Manager on M1 iPads, something that has been present since the first iPadOS 16 beta was released a few months ago. It'll be re-introduced in a software update coming later this year. Given that some of the iPad community has been pretty vocal about issues with Stage Manager, particularly when using it with an external display, it makes sense that Apple is taking some extra time to keep working on it.

Obviously, we'll need to try Stage Manager on an older iPad Pro before we can say how well it works, but the A12X and A12Z chips are still plenty powerful, so the experience should hopefully not be any different than on an M1 iPad. It's a bummer that external monitor support isn't included, but this should still be welcome news to people who bought Apple's most expensive iPads in the last few years.

Apple provided Engadget with the following statement about this update:

We introduced Stage Manager as a whole new way to multitask with overlapping, resizable windows on both the iPad display and a separate external display, with the ability to run up to eight live apps on screen at once. Delivering this multi-display support is only possible with the full power of M1-based iPads. Customers with iPad Pro 3rd and 4th generation have expressed strong interest in being able to experience Stage Manager on their iPads. In response, our teams have worked hard to find a way to deliver a single-screen version for these systems, with support for up to four live apps on the iPad screen at once.

External display support for Stage Manager on M1 iPads will be available in a software update later this year.


via engadget.com

Google Play Store finally makes it easier to find Android TV and Wear OS apps

The Google Play Store is notorious for making it difficult to find apps optimized for non-phone devices — you've often had to guess and hope for the best. Now, however, it just involves a couple of taps. Google says it recently added Play Store home pages to its Android app with recommendations for Android Automotive, Android TV and Wear OS apps. Visit "other devices" and you can find a health tracker for your Galaxy Watch 5, or a video service for your Chromecast.

New search filters also limit results to those that support non-phone hardware. If you find something you like, you can remotely install it from your handset. Google also noted that it previously revised the Play Store website to improve navigation and features like remote installs.


The move follows efforts to accommodate tablet users, and could be helpful if you can't (or just don't want to) search for apps on the device where you'll use them. That's particularly helpful for Wear OS users who might have to browse apps on a tiny screen. You might find more apps for your devices and (as Google no doubt hopes) increase your chances of sticking to the Android ecosystem.

It's also difficult to ignore the timing. Google is formally debuting the Pixel Watch at its New York City event on October 6th, and just revamped the 1080p Chromecast. The improved app discovery could help sell these products to customers wondering if their favorite app is available. Not that you'll likely mind if you prefer third-party gear — this might boost Android as a whole.


via engadget.com

The Bose QuietComfort 45 noise-canceling headphones are back at their best price ever


Save $80: As of Sept. 27, the Bose QuietComfort 45 wireless noise-canceling headphones are $249 at Amazon. That matches the previous best price we've seen on these over-ear headphones following this 24% discount.


Noise is a problem on any commute. That's why a pair of noise-canceling headphones are your best friend on any airplane or subway, particularly if they're available for a great deal from one of the top names in sound quality.

Enjoy your quiet place anywhere with the Bose QuietComfort 45 headphones now on sale for $249 on Amazon for both the white and black models. This $80 price cut matches the best deal on this model we saw earlier this month. And since these Bose headphones headline our list of the best noise-canceling headphones for flying, today's deal is worth checking out.

With the QuietComfort 45 headphones, Bose steps up their active noise-canceling game for an even more immersive listening experience. When you switch on the Quiet Mode, tiny microphones inside the earcups constantly measure outside noise and cancel it out with opposite signals. You can also switch on Aware Mode so that those same microphones pick up outside noise to keep you aware of your surroundings like on busy streets.

Whether it's music, podcasts, or phone calls, everything sounds perfectly balanced automatically. You can personalize it based on your preferred bass, mid-range, and treble levels. You'll also enjoy up to 24 hours of battery life even with active noise-canceling on, and it only takes 15 minutes of quick charging to get a three-hour charge.


via IFmashable.com