
Technology

Apple didn’t just announce a $5000 monitor


There wasn’t too much reaction to the $4999 starting price of the monitor itself, or even the $1000 premium for the low-reflectivity upgrade. This is a true reference-grade monitor designed for critical color work by photographers, videographers, and filmmakers. Similar monitors typically cost upwards of $10,000, and Apple compared it to one costing $43,000.

But the Internet has had a lot to say about the optional stand. OK, here we go: the keynote audience came close to booing when Apple announced the stand would cost $1,000. A thousand dollars for a stand? “Outrageous” is the first word that comes to mind; paying $1,000 for a piece of metal to hold up the screen seems insane, even if it’s well known that people are ready to pay that kind of price for a piece of metal just because it’s “Apple”.

Apple itself is known for commanding high prices, but even compared to its own kit, the Pro Stand seems to have created a class of its own in terms of the Cupertino excellence mark-up. It’s not a direct comparison but there’s a swanky iMac stand aimed at regular people on Apple’s online US store. It’s called the Twelve South HiRise Pro. It works with iMacs, iMac Pros and external displays. It’s made from aluminium with an optional walnut finish on the front, adjustable to four different height options and has a “padded leather valet tray” for your phone, glasses, keys and other tchotchkes. It costs $150.

Just over $1,000 will also pay for: an Apple HomePod, an Apple Watch Series 4, an Apple TV 4K and a pair of AirPods 2.

Remember these are Apple products we’re talking about. It doesn’t make sense to compare these ‘consumer’ products to the Mac Pro and Pro Display XDR, but this is an adjustable metal stand versus a speaker, a smartwatch, a streaming box and some in-ear headphones. The Pro Stand has singlehandedly done the impossible and made them all look like bargains. Bravo. We are not worthy.


Technology

WhatsApp is not safe


Developers deny this allegation

In a recent official statement, a representative of the United Nations (UN) said that the popular WhatsApp messenger is not safe to use.


Due to security concerns, the UN has banned its officials from using the messaging app since June 2019.

What are the roots of this accusation? Recall that, just days earlier, reports emerged that the crown prince of Saudi Arabia, Mohammed bin Salman, may be directly linked to the hacking of the smartphone of Jeff Bezos, the founder of Amazon. Allegedly, an encrypted WhatsApp message containing a malicious file was sent from the prince’s phone.

 

Each UN official was instructed to refrain from using WhatsApp for official communication since this messenger is not safe. UN spokesperson Farhan Haq

At the same time, WhatsApp’s developers claim that 1.5 billion people use what they call the best messenger in the industry, including in terms of security.

Each message is secured with end-to-end encryption, which prevents messages from being read by WhatsApp itself or by unwanted third parties.

Our encryption technology, developed in conjunction with Signal, remains the best to date and is highly regarded by security experts. Carl Woog, Director of Communications, WhatsApp.
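As a rough, hypothetical illustration of what end-to-end encryption means in practice, the sketch below uses the PyNaCl library (an assumption for this example; WhatsApp itself uses the Signal protocol, which is far more elaborate). The point is simply that private keys never leave the two devices, so a server relaying the message sees only ciphertext.

```python
# Minimal end-to-end encryption sketch using PyNaCl (not WhatsApp's actual code).
from nacl.public import PrivateKey, Box

alice_key = PrivateKey.generate()   # generated and kept on Alice's device
bob_key = PrivateKey.generate()     # generated and kept on Bob's device

# Alice encrypts for Bob with her private key and Bob's public key.
sending_box = Box(alice_key, bob_key.public_key)
ciphertext = sending_box.encrypt(b"See you at 7pm")

# A relaying server only ever handles `ciphertext`; without a private key it is opaque.

# Bob decrypts with his private key and Alice's public key.
receiving_box = Box(bob_key, alice_key.public_key)
print(receiving_box.decrypt(ciphertext).decode())  # "See you at 7pm"
```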

 

 


Technology

Four ways Google is using AI to solve problems too complex for humans


Google will not develop artificial intelligence for use in weapons, surveillance that violates internationally accepted norms, or technologies where the risks substantially outweigh the benefits.
These were the principles outlined by Google AI Lead Jeff Dean at an event aimed at highlighting how the tech giant is making good use of its expertise in AI and machine learning – a project the company has dubbed “AI for Social Good”.
Machine learning is the ability of machines to receive data and learn for themselves without being programmed with rules.
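To make that concrete, here is a tiny, generic sketch (not Google’s code) in which a small neural network built with TensorFlow’s Keras API learns the XOR rule purely from labelled examples; the rule itself is never written into the program.

```python
# Toy example: the model infers the XOR rule from data instead of being given it.
import numpy as np
import tensorflow as tf

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=np.float32)  # inputs
y = np.array([0, 1, 1, 0], dtype=np.float32)                      # labels (XOR)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(2,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=500, verbose=0)      # learns the mapping from examples alone

print(model.predict(X).round().flatten())   # approximately [0, 1, 1, 0]
```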
Apart from using ML for its own products and research, Google is working with several partners to provide solutions for problems either too vast or too complex for humans.
“We believe AI can help tackle some of the most difficult social and environmental challenges of our time, and not just in computer science but in areas where you wouldn’t necessarily expect it, like healthcare, environmental conservation and agriculture,” Jeff Dean said in the keynote.
HEALTHCARE
The AI has a better strike rate than trained doctors. (Supplied)
Product Manager for Google Health Lily Peng said the company’s AI ventures were helpful in the field of healthcare –  primarily in lung cancer screening and breast cancer metastases detection.
“We believe that technology can have a big impact in medicine, helping democratise access to care, returning attention to patients and helping researchers make scientific discoveries,” she said.
Lung cancer results in over 1.7 million deaths per year and is the sixth most common cause of death globally.
Evidence has shown that early detection offers the best chance of successful treatment; however, radiologists are often forced to search for minuscule signs of cancer across hundreds of 2D images captured during a single CT scan.
Google’s machine learning model can create a 3D image of the scans and search for subtle malignant tissue in the lungs – it can also factor in information from previous scans.
When using a single CT scan for diagnosis, Google’s model performed better than six radiologists: in the company’s research it detected five per cent more cancer cases while reducing false-positive exams by more than 11 per cent compared with unassisted radiologists.
In breast cancer metastases detection, Google says its machine learning model can find 95 per cent of cancer lesions in pathology images – pathologists can generally only detect 73 per cent.
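For readers curious what such a model might look like in code, the sketch below is a generic 3D convolutional classifier over a CT volume, written with TensorFlow’s Keras API. It is an illustration only, not a reproduction of Google’s model, and the 64x64x64 single-channel input shape is an arbitrary assumption.

```python
# Generic 3D convolutional classifier over a CT volume (illustrative sketch only).
import tensorflow as tf

inputs = tf.keras.Input(shape=(64, 64, 64, 1))                    # down-sampled CT volume
x = tf.keras.layers.Conv3D(16, kernel_size=3, activation="relu")(inputs)
x = tf.keras.layers.MaxPool3D(pool_size=2)(x)
x = tf.keras.layers.Conv3D(32, kernel_size=3, activation="relu")(x)
x = tf.keras.layers.MaxPool3D(pool_size=2)(x)
x = tf.keras.layers.GlobalAveragePooling3D()(x)
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)       # malignancy probability

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])
model.summary()
```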
ENVIRONMENTAL CONSERVATION
Google is using AI to save humpback whale populations. (Getty)
Humpback whale populations are currently listed as endangered as a result of whaling practices.
To give the at-risk marine species a better chance of survival, Google has partnered with the National Oceanic and Atmospheric Administration (NOAA) to create a solution.
The bio-acoustics project used 19 years’ worth of underwater audio data collected by NOAA to train Google’s neural network to identify the call of a humpback whale.
The program is able to track whales. (Supplied)
Product Manager at Google AI Julie Cattiau said machine learning is able to distinguish the sound of humpback whales easily from other similar sounds – something humans struggled to do.
“We started by turning the underwater audio data into a visual representation of the sound called a spectrogram, and then showed our algorithm many example spectrograms that were labelled with the correct species name,” Google explained.
“The more examples we can show it, the better our algorithm gets at automatically identifying those sounds.”
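A minimal sketch of that first step is shown below; it assumes the librosa audio library and uses a placeholder file name and parameters rather than NOAA’s actual pipeline. The resulting spectrogram, paired with a species label, becomes one training example for an ordinary image classifier.

```python
# Turn an underwater recording into a log-mel spectrogram (illustrative sketch).
# Assumes the librosa library; the file name and parameters are placeholders.
import librosa

def audio_to_spectrogram(path, sample_rate=10000, n_mels=64):
    audio, _ = librosa.load(path, sr=sample_rate)                      # raw waveform
    mel = librosa.feature.melspectrogram(y=audio, sr=sample_rate, n_mels=n_mels)
    return librosa.power_to_db(mel)                                    # 2-D time x frequency array

# Each labelled spectrogram (e.g. "humpback" vs "other") is one training example.
spectrogram = audio_to_spectrogram("hydrophone_clip.wav")
print(spectrogram.shape)
```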
The machine learning program gives a better understanding of where humpback whales live and where they travel.
“In the future, we plan to use our classifier to help NOAA better understand humpback whales by identifying changes in breeding location or migration paths, changes in relative abundance,” Google explained.
ACCESSIBILITY
ALS is a neurodegenerative condition that can result in the inability to speak and move.
By collaborating with non-profit ALS organisations, Google has been recording the voices of people living with the condition to optimise AI-based algorithms so that mobile phones and computers can transcribe the speech of people with impairments.
“The first step of our research effort is to ask volunteers to record voice samples that we can use to improve our speech recognition models. Once we have enough recordings from someone, our team builds a personalised communication system that works specifically for people who recorded their voice,” said Google AI Product Manager Julie Cattiau.
“Our AI algorithms currently aim to accommodate individuals who speak English and have impairments typically associated with ALS, but we believe that our research can be applied to larger groups of people and to different speech impairments.”
In addition to improving speech recognition, Google is also training personalised AI algorithms to detect sounds or gestures which generate spoken commands to Google Home.
The tech giant showcased the potential in a video of an ALS patient using non-speech sounds to trigger smart home devices such as lights, and facial gestures to cheer during a sports game.
FLOOD FORECASTING
The machine learning program can accurately predict the flood zone when compared to previous models. (Supplied)
Software engineering manager at Google AI Sella Nevo has been working on a machine learning project that will better predict areas that will be hit by devastating floods.
“The reason we do this work is to be able to warn people and protect them… We’re working to give people even more information and alert them early,” he said.
Google Maps users will receive an alert. (Supplied)
Mr Nevo said flood forecasting is currently based on low-resolution elevation maps that are nearly two decades old, making it virtually impossible to accurately predict affected areas.
However, by using machine learning models combined with satellite imagery and data from government agencies, researchers have been able to develop the Flood Forecasting Initiative.
Google launched a pilot program in India last year, as the country accounts for nearly 20 per cent of the world’s flood-related fatalities – some 107,487 deaths were recorded as a result of heavy rains and floods between 1953 and 2017.
The pilot program ran hundreds of thousands of simulations on its machine learning models ahead of last year’s flooding in Patna, India.
It predicted the regions affected by the flood with an accuracy of over 90 per cent, with the tech giant alerting those at risk using notifications on smartphones.
SURELY IT CAN’T ALL BE GOOD

At the core of the concept is Google’s TensorFlow – an end-to-end open source platform for machine learning.

Google AI Lead Jeff Dean said projects on the company’s cloud services have restrictions, but admitted the tech giant reluctantly has to accept that some people will take the open-source technology and use it for dubious purposes.

One possible example would be the whale tracking technology being used by illegal whalers.
“One of the things we decided when we open-sourced TensorFlow was to make it very flexible. Take it and do what you want with it,” he explained.
“I think there is an issue that one could use it to build higher-level machinery to do particular things that we might find not so great.”

Technology

Elon Musk is making implants to link the brain with a smartphone


Elon Musk wants to insert Bluetooth-enabled implants into your brain, claiming the devices could enable telepathy and repair motor function in people with injuries.
Speaking on Tuesday, the CEO of Tesla and SpaceX said his Neuralink devices will consist of a tiny chip connected to 1,000 wires measuring one-tenth the width of a human hair.
The chip features a USB-C port, the same connector used by Apple’s MacBooks, and connects via Bluetooth to a small computer worn over the ear and to a smartphone, Musk said.
Elon Musk has a new hi-tech idea. (AP)
“If you’re going to stick something in a brain, you want it not to be large,” Musk said, playing up the device’s diminutive size.
Neuralink, a startup founded by Musk, says the devices can be used by those seeking a memory boost or by stroke victims, cancer patients, quadriplegics or others with congenital defects.
The company says up to 10 units can be placed in a patient’s brain. The chips will connect to an iPhone app that the user can control.
The devices will be installed by a robot built by the startup. Musk said the robot, when operated by a surgeon, will drill 2 millimetre holes in a person’s skull.
The chip part of the device will plug the hole in the patient’s skull. “The interface to the chip is wireless, so you have no wires poking out of your head. That’s very important,” Musk added.
Trials could start before the end of 2020, Musk said, likening the procedure to Lasik eye correction surgery, which requires local anaesthetic.
Musk has said this latest project is an attempt to use artificial intelligence (AI) to have a positive effect on humanity. He has previously tried to draw attention to AI’s potential to harm humans.
Chips will be used for merging humans with AI. (Supplied)
He has invested some $100 million in San Francisco-based Neuralink, according to the New York Times.
Musk’s plan to develop human computer implants comes on the heels of similar efforts by Google and Facebook.
But critics aren’t so sure customers should trust tech companies with data ported directly from the brain.
“The idea of entrusting big enterprise with our brain data should create a certain level of discomfort for society,” said Daniel Newman, principal analyst at Futurum Research and co-author of the book Human/Machine.
“There is no evidence that we should trust or be comfortable with moving in this direction,” he added.
While the technology could help those with some type of brain injury or trauma, “Gathering data from raw brain activity could put people at great risk, and could be used to influence, manipulate and exploit them,” Frederike Kaltheuner of Privacy International told CNN Business.
“Who has access to this data? Is this data shared with third parties? People need to be in full control over their data.”