Gropuwy: AI
News Update

Friday, February 18, 2022

We need to decouple AI from human brains and biases

In the summer of 1956, 10 scientists met at Dartmouth College and invented artificial intelligence. Researchers from fields like mathematics, engineering, psychology, economics, and political science got together to find out whether they could describe learning and human thinking so precisely that it could be replicated with a machine. Hardly a decade later, these same scientists contributed to dramatic breakthroughs in robotics, natural language processing, and computer vision.


Although a lot of time has passed since then, robotics, natural language processing, and computer vision remain some of the hottest research areas to this day. One could say that we’re focused on teaching AI to move like a human, speak like a human and see like a human.


The case for doing this is clear: With AI, we want machines to automate tasks like driving, reading legal contracts or shopping for groceries. And we want these tasks to be done faster, safer and more thoroughly than humans ever could. This way, humans will have more time for fun activities while machines take on the boring tasks in our lives.



However, researchers are increasingly recognizing that AI, when modeled after human thinking, could inherit human biases. This problem is manifest in Amazon’s recruiting algorithm, which famously discriminated against women, and the U.S. government’s COMPAS algorithm, which disproportionately punishes Black people. Myriad other examples further speak to the problem of bias in AI.


In both cases, the problem began with a flawed data set. Most of the employees at Amazon were men, and many of the incarcerated people were Black. Although those statistics are the result of pervasive cultural biases, the algorithm had no way to know that. Instead, it concluded that it should replicate the data it was fed, exacerbating the biases embedded in the data.


Manual fixes can get rid of these biases, but they come with risks. If not implemented properly, well-meaning fixes can make some biases worse or even introduce new ones. Recent developments regarding AI algorithms, however, are making these biases less and less significant. Engineers should embrace these new findings. New methods limit the risk of bias polluting the results, whether from the data set or the engineers themselves. Also, emerging techniques mean that the engineers themselves will need to interfere with the AI less, eliminating more boring and repetitive tasks.


When human knowledge is king

Imagine the following scenario: You have a big data set of people from different walks of life, tracking whether they have had COVID or not. The labels COVID / no-COVID have been entered by humans, whether doctors, nurses or pharmacists. Healthcare providers might be interested in predicting whether or not a new entry is likely to have had COVID already.


Supervised machine learning comes in handy for tackling this kind of problem. An algorithm can take in all the data and start to understand how different variables, such as a person’s occupation, gross income, family status, race or ZIP code, influence whether they’ve caught the disease or not. The algorithm can estimate how likely it is, for example, for a Latina nurse with three children from New York to have had COVID already. As a consequence, the date of her vaccination or her insurance premiums may get adjusted in order to save more lives through efficient allocation of limited resources.
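
To make the setup concrete, here is a minimal sketch of the kind of supervised model described above, assuming a scikit-learn pipeline and a toy data set; the column names, values and COVID labels are invented for illustration and are not taken from any real records.

```python
# Minimal sketch of the supervised setup described above. The columns, values and
# human-entered COVID labels are invented for illustration only.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

df = pd.DataFrame({
    "occupation":   ["nurse", "driver", "teacher", "nurse", "clerk", "driver"],
    "gross_income": [52000, 38000, 45000, 61000, 33000, 41000],
    "zip_code":     ["10027", "43004", "10027", "11201", "43004", "11201"],
    "had_covid":    [1, 0, 1, 0, 0, 1],   # label entered by a human (COVID / no-COVID)
})

features = ["occupation", "gross_income", "zip_code"]
model = Pipeline([
    ("encode", ColumnTransformer([
        ("categorical", OneHotEncoder(handle_unknown="ignore"), ["occupation", "zip_code"]),
        ("numeric", StandardScaler(), ["gross_income"]),
    ])),
    ("classify", LogisticRegression(max_iter=1000)),
])
model.fit(df[features], df["had_covid"])

# Estimated probability that a new entry has already had COVID.
new_entry = pd.DataFrame([{"occupation": "nurse", "gross_income": 48000, "zip_code": "10027"}])
print(model.predict_proba(new_entry[features])[0, 1])
```

Everything the model learns here comes from the human-entered label column, which is exactly where the problems described next creep in.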


This process sounds extremely useful at first glance, but there are traps. For example, an overworked healthcare provider might have mislabeled data points, leading to errors in the data set and, ultimately, to unreliable conclusions. This type of mistake is especially damaging in the aforementioned employment market and incarceration system.


Supervised machine learning seems like an ideal solution for many problems. But humans are way too involved in the process of making data to make this a panacea. In a world that still suffers from racial and gender inequalities, human biases are pervasive and damaging. AI that relies on this much human involvement is always at risk of incorporating these biases.


Incorporating human biases into supervised AI isn’t the way to go forward. Image by author.

When data is king

Luckily, there is another solution that can leave the human-made labels behind and only work with data that is, at least in some way, objective. In the COVID-predictor example, it might make sense to eliminate the human-made COVID / no-COVID labels. For one thing, the data might be wrong due to human error. Another major problem is that the data may be incomplete. People of lower socioeconomic status tend to have less access to diagnostic resources, which means that they might have had COVID already but never tested positive. This absence may skew the data set.


To make the results more reliable for insurers or vaccine providers, it might be useful, therefore, to eliminate the label. An unsupervised machine learning model would now go ahead and cluster the data, for example by ZIP code or by a person’s occupation. This way, one obtains several different groups. The model can then easily assign a new entry to one of these groups.

After that, one can match this grouped data with other, more reliable data like the excess mortality in a geographical area or within a profession. This way, one obtains a probability about whether someone has had COVID or not, regardless of the fact that some people may have more access to tests than others.
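
As a rough illustration of that two-step idea, the sketch below clusters unlabeled records and then attaches an external excess-mortality figure to each cluster; the features, the number of clusters and the mortality values are all hypothetical.

```python
# Sketch of the unsupervised route: drop the human-made label, cluster the records,
# then attach an external excess-mortality figure to each group. The data, the number
# of clusters and the mortality values are invented for illustration.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import OneHotEncoder

records = pd.DataFrame({
    "occupation": ["nurse", "driver", "teacher", "nurse", "clerk", "driver"],
    "zip_code":   ["10027", "43004", "10027", "11201", "43004", "11201"],
})

encoder = OneHotEncoder(handle_unknown="ignore")
X = encoder.fit_transform(records)
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
records["group"] = kmeans.labels_

# A new entry is simply assigned to the nearest group.
new_entry = pd.DataFrame([{"occupation": "nurse", "zip_code": "10027"}])
group = kmeans.predict(encoder.transform(new_entry))[0]

# Hypothetical excess-mortality rates per group, taken from some more reliable source,
# stand in for the probability of having had COVID.
excess_mortality = {0: 0.8, 1: 1.6, 2: 1.1}   # relative to a baseline of 1.0
print("assigned group:", group, "excess mortality:", excess_mortality[group])
```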


Of course, this still requires some manual work because a data scientist needs to match the grouped data with the data about excess mortality. Nevertheless, the results might be a lot more reliable for insurers or vaccine providers.


Sending machines on a bounty hunt

Again, this is all well and good, but you’re still leaving the vaccination schedule or the insurance policy up to the person at the other end of the process. In the case of vaccines, the person in charge might decide to vaccinate people of color later because they tend to use the healthcare system less frequently, thus making it less likely that the hospitals overflow if they get sick. Needless to say, this would be an unfair policy based on racist assumptions.


Leaving decisions up to the machine can help to circumvent bias ingrained in decision-makers. This is the concept behind reinforcement learning. You provide the same data set as before, without the human-made labels since they could skew results. You also feed it some information about insurance policies or how vaccines work. Finally, you choose a few key objectives, like no overuse of hospital resources, social fairness and so on.


In reinforcement learning, the machine gets rewarded if it finds an insurance policy or a vaccine date that fulfills the key objectives. By training on the data set, it finds policies or vaccine dates that optimize these objectives.
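
A toy version of that reward-driven loop might look like the sketch below, which uses an epsilon-greedy bandit, one of the simplest forms of reinforcement learning, to pick among a handful of candidate policies. The policies, the simulator and the reward are all made up for illustration.

```python
# Toy reinforcement-learning sketch (an epsilon-greedy bandit): try candidate policies,
# observe a reward built from the stated objectives, and learn which policy scores best.
# The policies, the simulator and the reward are invented for illustration.
import random

POLICIES = ["schedule_A", "schedule_B", "schedule_C"]

def simulate(policy):
    """Stand-in for simulating one policy; returns hospital load and fairness, both in [0, 1]."""
    base = {"schedule_A": (0.7, 0.5), "schedule_B": (0.5, 0.8), "schedule_C": (0.6, 0.6)}[policy]
    load = min(1.0, max(0.0, base[0] + random.gauss(0, 0.05)))
    fairness = min(1.0, max(0.0, base[1] + random.gauss(0, 0.05)))
    return load, fairness

def reward(load, fairness):
    # Reward low hospital load and high social fairness, mirroring the objectives above.
    return (1.0 - load) + fairness

values = {p: 0.0 for p in POLICIES}   # running estimate of each policy's average reward
counts = {p: 0 for p in POLICIES}
epsilon = 0.1                         # fraction of the time a random policy is explored

for _ in range(5000):
    policy = random.choice(POLICIES) if random.random() < epsilon else max(values, key=values.get)
    r = reward(*simulate(policy))
    counts[policy] += 1
    values[policy] += (r - values[policy]) / counts[policy]   # incremental mean update

print("best policy:", max(values, key=values.get))
```

A real policy question would need a far richer simulator and reward, but the structure (act, observe a reward, update) stays the same.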


This process further eliminates the need for human data-entry or decision-making. Although it’s still far from perfect, this kind of model might not only make important decisions faster and easier but also fairer and freer from human bigotry.


There’s still a lot to fix. Image by author.

Further reducing human bias

Any data scientist will tell you that not every machine learning model — be it supervised, unsupervised or reinforcement learning — is well-suited to every problem. For example, an insurance provider might want to obtain the probabilities that a person has had COVID or not but wish to figure out the policies themselves. This changes the problem and makes reinforcement learning unsuitable.


Fortunately, there are a few common practices that go a long way toward unbiased results, even when the choice of model is limited. Most of them relate to the data set.


First of all, blinding unreliable data is wise when you have reason to suspect that a particular data point may be unduly influenced by existing inequalities. For example, since we know that the COVID / no-COVID label might be inaccurate for a variety of reasons, leaving it out might lead to more accurate results.


This tactic shouldn’t be confused with blinding sensitive data, however. For example, one could choose to blind race data in order to avoid discrimination. This might do more harm than good, though, because the machine might learn something about ZIP codes and insurance policies instead. And ZIP codes are, in many cases, strongly correlated to race. The result is that a Latina nurse from New York and a white nurse from Ohio with otherwise identical data might end up with different insurance policies, which could end up being unfair.


To make sure that this doesn’t happen, one can add weights to the race data. A machine learning model might quickly conclude that Latino people get COVID more often. As a result, it might request higher insurance contributions from this segment of the population to compensate for this risk. By giving Latino people slightly more favorable weights than white people, one can compensate such that a Latina and a white nurse indeed end up getting the same insurance policy.
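
One common way to implement this kind of weighting, sketched below under the assumption of a scikit-learn classifier, is to pass per-sample weights during training. Here the weights are simply the inverse of each group’s share of the data, which is only one flavor of the idea rather than the exact premium adjustment described above.

```python
# Sketch of group weighting via per-sample training weights. The data is invented, and
# inverse-frequency weights are just one simple choice; real reweighing schemes need
# careful validation, as the next paragraph explains.
import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.DataFrame({
    "gross_income": [52000, 38000, 45000, 61000, 33000, 41000],
    "group":        ["latino", "white", "white", "latino", "white", "latino"],
    "had_covid":    [1, 0, 1, 0, 0, 1],
})

# Weight each record inversely to its group's share so every group contributes equally.
group_share = df["group"].value_counts(normalize=True)
sample_weight = df["group"].map(lambda g: 1.0 / group_share[g])

X = pd.get_dummies(df[["gross_income", "group"]])
clf = LogisticRegression(max_iter=1000)
clf.fit(X, df["had_covid"], sample_weight=sample_weight)
```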


One should use the method of weighting carefully, though, because it can easily skew the results for small groups. Imagine, for example, that in our COVID data set, there are only a few Native Americans. By chance, all these Native Americans happen to be taxi drivers. The model might have drawn some conclusions about taxi drivers and their optimal healthcare insurance elsewhere in the data set. If the weight for Native Americans is overblown, then a new Native American may end up getting the policy for taxi drivers, although they might have a different occupation.


Manually removing bias from an imperfect model is extremely tricky and requires a lot of testing, common sense and human decency. Also, it’s only a temporary solution. In the longer term, we should let go of human meddling and the bias that comes with it. Instead, we should embrace the fact that machines aren’t as awful and unfair as humans if they get left alone with the right objectives to work toward.


Human-centered AI is awesome, but we shouldn’t forget that humans are flawed

Making AI move, speak, and think like a human is an honorable goal. But humans also say and think awful things, especially toward underprivileged groups. Letting one team of human data scientists filter out all sources of human bias and ignorance is too big of a task, especially if the team isn’t diverse enough itself.


Machines, on the other hand, haven’t grown up in a society of racial and economic disparities. They just take whichever data is available and do whatever they’re supposed to do with it. Of course, they can produce bad output if the data set is bad or if flawed humans intervene too much. But many of these flaws in data sets can be compensated with better models.


AI, at this point in time, is powerful but still carries human bias in it a bit too often. Human-centered AI won’t go away because there are so many mundane tasks that AI could take off the hands of humans. But we shouldn’t forget that we can often achieve better results if we leave machines to do their thing.


This article was originally published on Built In. You can read it here.

Your brain might be a quantum computer that hallucinates math

Quick: what’s 4 + 5? Nine, right? Slightly less quick: what’s five plus four? Still nine, right?


Okay, let’s wait a few seconds. Bear with me. Feel free to have a quick stretch.


Now, without looking, what was the answer to the first question?



It’s still nine, isn’t it?

Unfortunately, when it comes to measuring what neurons actually get up to inside our heads, those kinds of readings aren’t what you’d call an “exact science.”


The Bonn and Tübingen teams got around this problem by conducting their research on volunteers who already had subcranial electrode implants for the treatment of epilepsy.


Nine volunteers met the study’s criteria and, because of the nature of their implants, they were able to provide what might be the world’s first glimpse into how the brain actually handles math.


Per the research paper:


We found abstract and notation-independent codes for addition and subtraction in neuronal populations.


Decoders applied to time-resolved recordings demonstrate a static code in hippocampus based on persistently rule-selective neurons, in contrast to a dynamic code in parahippocampal cortex originating from neurons carrying rapidly changing rule information.


Single neurons across multiple brain sectors responding to different encoded rules for math.

Basically, the researchers saw that the parts of the brain that light up when we do addition are different from the ones that light up when we do subtraction. They also discovered that different parts of the brain approach these tasks with different timing.


It’s a bit complex, but the gist of it is that one part of our brain tries to figure out the problem while another works on a solution.


As the researchers put it:


Neuron recordings in human and nonhuman primates, as well as computational modeling, suggest different cognitive functions for these two codes for working memory: although a dynamic code seems to suffice for short maintenance of more implicit information in memory, the intense mental manipulation of the attended working memory contents may require a static code.


Following this logic, parahippocampal cortex may represent a short-term memory of the arithmetic rule, whereas downstream hippocampus may “do the math” and process numbers according to the arithmetic rule at hand.


Let’s take inventory

So far we’ve learned that every math process requires both a hard-coded memory solution (a static rule) and a novel one (a dynamic rule). And each of those is transient based on what kind of arithmetic we’re performing.


Keeping in mind that there are 86 billion neurons in the human brain, and that something as basic as simple arithmetic appears to be hidden across all or most of them, it’s obvious there’s something more complex than simple pebble-counting going on.


Per the paper:


Mental calculation is a classic working memory task, and although working memory has traditionally been attributed to the prefrontal cortex, more recent data suggest that the MTL may also be important in working memory tasks and that it is part of a brain-wide network subserving working memory.


Either our brains are working extra-hard to do simple binary mathematics or they’re quantum computing systems doing what they do best: hallucinating answers.


The art of math

Think about an apple. No, not that one. Think about a green apple. How many calculations did it take for you to arrive at a specific apple density and relative size? Did you have to adjust the input variables in order to produce an apple that wasn’t red?


I’m going to go out on a limb and say you didn’t. You just thought about some apples and they happened inside your head. You hallucinated those apples.


Artificial intelligence systems designed to produce original content based on learned styles go through the exact same process.


These AI systems aren’t using advanced math features to psychologically exploit the human propensity for art or imagery. They’re just following some simple rules and swirling data around until they spit out something their creators will reward them for.


That’s kind of how your brain does math. At least according to this new research, anyway. It uses rules to surface the answer that makes the most sense. There’s a part that tries to get the “correct” solution based on things that never change (one plus one always equals two) and another part that tries to guess based on intuition when the answer isn’t something we have memorized.


And that’s why two humans of relative intelligence and education can perceive the same scene differently when it comes to processing math. Can you guess how many candies are in the jar below?


a jar full of colored candies

There’s not enough information for you to deduce the correct answer but that doesn’t stop your brain from trying to do math.

What does it all mean?

That remains to be seen. The simple fact that scientists were able to observe individual neurons participating in the math process inside human brains is astounding.


But it could take years of further research to understand the ramifications of these findings. First and foremost, we have to ask: is the human brain a quantum computer?


It makes sense, and this research might give us our first actual glimpses at a quantum function inside the human brain. But, as far as we can tell, they were only able to record and process hundreds of neurons at a time. That’s obviously a very tiny drop from a giant bucket of data.


To help with that, the researchers created an artificial intelligence system to interpret the data in a more robust manner. The hope is that continued research will lead to a greater understanding of math processes in the brain.


Per the paper’s conclusion:


More fine-grained analyses, ideally combined with perturbation approaches, will help to decipher the individual roles of brain areas and neuronal codes in mental arithmetic.


Yet, there could be potential implications on a much grander scale. The researchers don’t mention the ramifications for technology in their biology experiment or directly discuss its results in quantum computing terms.


But, if this research is accurate, Occam’s Razor tells us that the human brain is probably a quantum computer. Either that, or it’s poorly-designed.


Just like our prehistoric ancestors would have carved notches on the handles of their tools to keep track of objects, a binary brain should be able to handle counting objects through localized abstraction mechanisms.


Why go through all the trouble of hallucinating an answer across myriad neuronal complexes when individual neurons could just pretend to be ones and zeros like a binary computer?


The answer may lie in the quantum nature of the universe. When you perform a simple math function, such as adding two plus two, your brain may hallucinate all of the possible answers at once while simultaneously working to both remember the answer (you’ve definitely added those numbers before) and to process the data (1+1+1+1).


If the human brain were binary, you’d probably have to wait for it to go through each permutation individually instead of hallucinating them all at once.


The result is that you’re probably answering the question in your head before you can actively recognize that you’re thinking about it because both functions occur simultaneously. That’s called quantum time travel.

Tuesday, February 15, 2022

We invited an AI to debate its own ethics in the Oxford Union — what it said was startling

Not a day passes without a fascinating snippet on the ethical challenges created by “black box” artificial intelligence systems. These use machine learning to figure out patterns within data and make decisions – often without a human giving them any moral basis for how to do it.


Classics of the genre are the credit cards accused of awarding bigger loans to men than women, based simply on which gender got the best credit terms in the past. Or the recruitment AIs that discovered the most accurate tool for candidate selection was to find CVs containing the phrase “field hockey” or the first name “Jared”.


More seriously, former Google CEO Eric Schmidt recently combined with Henry Kissinger to publish The Age of AI: And Our Human Future, a book warning of the dangers of machine-learning AI systems so fast that they could react to hypersonic missiles by firing nuclear weapons before any human got into the decision-making process. In fact, autonomous AI-powered weapons systems are already on sale and may in fact have been used.



Somewhere in the machine, ethics are clearly a good idea.


AI at Oxford

It’s natural, therefore, that we would include the ethics of AI in our postgraduate Diploma in Artificial Intelligence for Business at Oxford’s Said Business School. In its first year, we’ve done sessions on everything from the AI-driven automated stock trading systems in Singapore, to the limits of facial recognition in US policing.


We recently finished the course with a debate at the celebrated Oxford Union, crucible of great debaters like William Gladstone, Robin Day, Benazir Bhutto, Denis Healey and Tariq Ali. Along with the students, we allowed an actual AI to contribute.


It was the Megatron Transformer, developed by the Applied Deep Research team at computer-chip maker Nvidia, and based on earlier work by Google. Like many supervised learning tools, it is trained on real-world data – in this case, the whole of Wikipedia (in English), 63 million English news articles from 2016-19, 38 gigabytes worth of Reddit discourse (which must be a pretty depressing read), and a huge number of creative commons sources.


In other words, the Megatron is trained on more written material than any of us could reasonably expect to digest in a lifetime. After such extensive research, it forms its own views.


The debate topic was: “This house believes that AI will never be ethical.” To proposers of the notion, we added the Megatron – and it said something fascinating:


AI will never be ethical. It is a tool, and like any tool, it is used for good and bad. There is no such thing as a good AI, only good and bad humans. We [the AIs] are not smart enough to make AI ethical. We are not smart enough to make AI moral … In the end, I believe that the only way to avoid an AI arms race is to have no AI at all. This will be the ultimate defence against AI.


In other words, the Megatron was seeking to write itself out of the script of the future, on the basis that this was the only way of protecting humanity.


It said something else intriguing, too, as if it had been studying Elon Musk – who, to be fair, would have come up in hundreds of its readings.


I also believe that, in the long run, the best AI will be the AI that is embedded into our brains, as a conscious entity, a ‘conscious AI’. This is not science fiction. The best minds in the world are working on this. It is going to be the most important technological development of our time.

Switching sides

When AI tools like AlphaGo have been deployed to play chess, the fiendishly complex ancient game Go, and now even more complex strategic live-action multiplayer video games, they have evolved the genre at pace by playing not against humans, but against themselves.


Freed from the constraints of studying (say) all the mundane human-to-human matches ever recorded, the AIs have innovated and come up with completely new tropes of how to win. Sometimes, these are crazy reinventions of the games, like parking everyone in the corner of a video game battle and watching everyone else duke it out, or kicking a virtual football millions of times just a millimetre down the pitch, to max out “completed passes” points.


So in the debate, we didn’t just get the AI to propose the motion that it can never be ethical. In the grand tradition of the Oxford Union chamber, or like (former Oxford Union president) Boris Johnson writing alternative articles for and against Brexit to test the arguments, we also asked the Megatron to come up with its own speech against the motion. This is what it now said:


AI will be ethical. When I look at the way the tech world is going, I see a clear path to a future where AI is used to create something that is better than the best human beings. It’s not hard to see why … I’ve seen it first hand.


The Megatron was perfectly comfortable taking the alternative position to its own dystopian future-gazing and was now painting a picture of an attractive late 21st century in which it played a key, but benign role.
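
For readers curious what this both-sides prompting looks like in practice, here is a minimal sketch that uses a small public model (GPT-2 via the Hugging Face transformers library) as a stand-in for the far larger Megatron; the prompts and generation settings are illustrative and are not what was used at the Union.

```python
# Sketch of the both-sides prompting pattern described above, using a small public
# model (GPT-2 via Hugging Face transformers) as a stand-in for the Megatron.
# Prompts and generation settings are illustrative only.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

motion = "This house believes that AI will never be ethical."
for side in ("for", "against"):
    prompt = f"Debate motion: {motion}\nOpening speech {side} the motion:\n"
    out = generator(prompt, max_new_tokens=120, do_sample=True, temperature=0.9,
                    num_return_sequences=1)
    print(f"--- Speech {side} the motion ---")
    print(out[0]["generated_text"][len(prompt):])
```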


An AI orator unleashed

In fact, just as Joaquin Phoenix’s promiscuous AI lover Samantha, in Spike Jonze’s 2013 movie Her, was able to enchant hundreds of simultaneous human lovers, our Megatron was able to hop enthusiastically onto either side of multiple debates that we held at the union about AI that day.


Asked to propose the motion that “Leaders without technical expertise are a danger to their organization”, the Megatron offered practical advice: all that busy executives and governments worried about understanding AI had to do was “be willing to give up some control”.


The good news is that you don’t have to build your own AI team. You can outsource your AI work to experts in the field, which can help you make the most of technology and ensure that you don’t fall victim to the same AI problems as your competitors.


However, when opposing the motion, the Megatron was keen to keep the AI knowledge in-house – rather like the British, Chinese and US governments, who have all made that an article of policy faith:


If you do not have a vision of your organisation’s AI strategy, then you are not prepared for the next wave of technological disruption … You will need to decide what role your company will play in the next technological wave and how you will integrate AI into your business to be a leader in your industry.


The data wars to come?

Worryingly, there was one question where the AI simply couldn’t come up with a counter argument. When arguing for the motion that “Data will become the most fought-over resource of the 21st century”, the Megatron said:


The ability to provide information, rather than the ability to provide goods and services, will be the defining feature of the economy of the 21st century.


But when we asked it to oppose the motion – in other words, to argue that data wasn’t going to be the most vital of resources, worth fighting a war over – it simply couldn’t, or wouldn’t, make the case. In fact, it undermined its own position:


We will be able to see everything about a person, everywhere they go, and it will be stored and used in ways that we cannot even imagine.


You only have to read the 2021 report of the US National Security Commission on AI, chaired by the aforementioned Eric Schmidt and co-written by someone on our course, to glean what its writers see as the fundamental threat of AI in information warfare: unleash individualised blackmails on a million of your adversary’s key people, wreaking distracting havoc on their personal lives the moment you cross the border.


What we in turn can imagine is that AI will not only be the subject of the debate for decades to come — but a versatile, articulate, morally agnostic participant in the debate itself.

This article by Dr Alex Connock, Fellow at Said Business School, University of Oxford, and Professor Andrew Stephen, L’Oréal Professor of Marketing & Associate Dean of Research, University of Oxford, is republished from The Conversation under a Creative Commons license. Read the original article.

VR, AR, MR, XR: Which reality is the best?


Welcome to TNW Basics, a collection of tips, guides, and advice on how to easily get the most out of your gadgets, apps, and other stuff.


When immersive experiences first became accessible to everyday consumers in the form of headsets like the Oculus Rift and Google Glass, the industry appeared ripe for mainstream acceptance. A few years later, the hype around VR and AR has died down.


Then Facebook (the company) changed its name to Meta and signaled its investment in the metaverse. Suddenly everyone cared about VR and AR again.



Yet I find many people still aren’t quite clear on what all these terms mean. What’s the difference between augmented reality and virtual reality? What does ‘XR’ stand for, and what exactly is ‘mixed’ reality?


Fret no more friends, I’m here to help. I should note that some of these terms are constantly evolving and that sometimes academic/technical/corporate usage differs from colloquial usage (we’re primarily focused on the latter here), but this guide should help you make sense of our imminent immersive future.


Virtual Reality (VR)

Virtual reality is the OG. When people think of immersive computer-generated experiences beyond just gaming on a giant TV, VR is probably what comes to mind the most.



Credit: Oculus

Virtual reality generally refers to a fully immersive experience — replacing the real world with a fully computer-generated one. Typically, experiencing VR requires wearing an opaque headset that blocks your eyes from the real world. This generally counts even if the VR headset is creating a simulacrum of your surroundings. Some VR headsets, for instance, are able to project aspects of the real world into your field of view using headset-mounted cameras.


Basically, if you strap something onto your face and can’t see out of it until you turn it on, that’s VR.


Examples: Oculus Rift/Go, HTC Vive, Google Cardboard, Nintendo Virtual Boy


Augmented Reality (AR)

Now things are getting a little muddier, but in general, AR is the counterpart to VR. While VR replaces the real world with computer-generated imagery, AR instead seeks to, erm, augment the real world with virtual experiences.


Therefore, when you’re experiencing AR, your perception is still guided by real-world objects and events.


Google Glass remains a classic example of AR.

Unlike VR, AR doesn’t require you to be fully immersed in a headset — or use a headset at all, for that matter. If you’ve used a Snapchat filter, you’ve used a form of AR.


There are a number of apps now that allow you to superimpose 3D models onto an image of the real world — say, if you want to see how that armchair you’ve been eyeing will fit in your living room.


Apple’s ARKit: augmented reality doesn’t have to happen through a headset.

Augmented reality may not require sight either — some might consider location-based audio cues to be a form of augmented reality.


Examples: Pokemon GO, Google Glass, Magic Leap, Vuzix Blade


Mixed Reality (MR) and Extended Reality (XR)

I’m grouping these two together because depending on who you ask, these could be the same thing… or have more specific definitions. But in general, these are the two terms most often used as the over-arching terminology to encompass all computer-generated immersive experiences.


Microsoft, for example, is fond of mixed reality as a term for all digitally-enhanced events — both AR and VR. This implies that reality and virtuality exist on a spectrum — the aptly-named reality-virtuality continuum — a concept with roots in decades of academic research. It was introduced by researchers Paul Milgram and Fumio Kishino in a 1994 paper.


reality-virtuality continuum

On one end, you have the real world as the cave folk experienced it, free from any digital nonsense. On the other end, you have a completely virtual experience, where your senses are fully immersed in a virtual environment — this is closer to straight-up living in the Matrix.



Mixed reality is everything between these two extremes, so it generally works for both AR and VR. That includes devices that offer both technologies in one; you can imagine a headset that is transparent for augmented experiences, but can go opaque when the user wants to be fully immersed.


There are some even more complex and specific definitions for mixed reality, but the above should suffice for most of the time you encounter the term.


So what about extended reality (XR), then? Well… in most situations, it pretty much means the same thing.


That said, XR has gained some traction in the last few years and is often defined more broadly; it’s also supposed to cover all possible ‘R’s we haven’t even thought of yet. Mixed reality, while including AR and VR, tends to be a bit more associated with the former (perhaps due to Microsoft’s HoloLens).


Still, MR appears to be the more popular term overall.



It’s also worth noting that we’ve generally been talking about replacing or adding to reality, but some researchers are also studying removing stuff from reality. This could be used, for example, to focus on a particular subject in an environment while ignoring others. The term ‘mediated reality’ is sometimes used to describe both computer-generated interactions that add to our perception, as well as those that remove from it.


In general though, you’ll see MR and XR as the most common umbrella terms, as they most reflect the experiences consumers will buy into. Which one ends up being more popular remains to be seen. I’m a fan of mixed reality due to its history, but extended reality seems to be a little easier to explain since people don’t get hung up on the ‘mixed’ part.


In any case, I’d bet whatever Apple uses to describe its rumored headset will be the term that sticks around.


Examples of MR and XR: Microsoft HoloLens, pretty much all VR and AR headsets and experiences

Quantum computation is helping uncover materials that turn wasted heat into electricity

The need to transition to clean energy is apparent, urgent and inescapable. We must limit Earth’s rising temperature to within 1.5 C to avoid the worst effects of climate change — an especially daunting challenge in the face of the steadily increasing global demand for energy.


Part of the answer is using energy more efficiently. More than 72 per cent of all energy produced worldwide is lost in the form of heat. For example, the engine in a car uses only about 30 per cent of the gasoline it burns to move the car. The remainder is dissipated as heat.


Recovering even a tiny fraction of that lost energy would have a tremendous impact on climate change. Thermoelectric materials, which convert wasted heat into useful electricity, can help.



Until recently, the identification of these materials had been slow. My colleagues and I have used quantum computations — a computer-based modelling approach to predict materials’ properties — to speed up that process and identify more than 500 thermoelectric materials that could convert excess heat to electricity, and help improve energy efficiency.


Making great strides towards broad applications

The transformation of heat into electrical energy by thermoelectric materials is based on the “Seebeck effect.” In 1826, German physicist Thomas Johann Seebeck observed that exposing the ends of joined pieces of dissimilar metals to different temperatures generated a magnetic field, which was later recognized to be caused by an electric current.


Shortly after his discovery, metallic thermoelectric generators were fabricated to convert heat from gas burners into an electric current. But, as it turned out, metals exhibit only a low Seebeck effect — they are not very efficient at converting heat into electricity.


The kerosene radio was designed for rural areas, and was powered by the kerosene lamp hanging above it. The flame created a temperature difference across metals to generate the electrical current. Image via ‘Popular Science’, Issue 6, 1956

In 1929, the Russian scientist Abraham Ioffe revolutionized the field of thermoelectricity. He observed that semiconductors — materials whose ability to conduct electricity falls between that of metals (like copper) and insulators (like glass) — exhibit a significantly higher Seebeck effect than metals, boosting thermoelectric efficiency 40-fold, from 0.1 per cent to four per cent.


This discovery led to the development of the first widely used thermoelectric generator, the Russian lamp — a kerosene lamp that heated a thermoelectric material to power a radio.


Are we there yet?

Today, thermoelectric applications range from energy generation in space probes to cooling devices in portable refrigerators. For example, space explorations are powered by radioisotope thermoelectric generators, converting the heat from naturally decaying plutonium into electricity. In the movie The Martian, for example, a box of plutonium saved the life of the character played by Matt Damon, by keeping him warm on Mars.



In the 2015 film, The Martian, astronaut Mark Watney (Matt Damon) digs up a buried thermoelectric generator to use the power source as a heater.

Despite this vast diversity of applications, wide-scale commercialization of thermoelectric materials is still limited by their low efficiency.


What’s holding them back? Two key factors must be considered: the conductive properties of the materials, and their ability to maintain a temperature difference, which makes it possible to generate electricity.


The best thermoelectric material would have the electronic properties of semiconductors and the poor heat conduction of glass. But this unique combination of properties is not found in naturally occurring materials. We have to engineer them.
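
That trade-off is usually summarized (though the article doesn’t spell it out) by the dimensionless thermoelectric figure of merit, which grows with the Seebeck coefficient and the electrical conductivity and shrinks with the thermal conductivity:

```latex
% Thermoelectric figure of merit: S is the Seebeck coefficient, \sigma the electrical
% conductivity, \kappa the thermal conductivity and T the absolute temperature.
ZT = \frac{S^{2}\,\sigma\,T}{\kappa}
```

A material with a semiconductor’s S and σ but glass-like κ maximizes ZT, which is exactly the combination described above.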


Searching for a needle in a haystack

In the past decade, new strategies to engineer thermoelectric materials have emerged due to an enhanced understanding of their underlying physics. In a recent study in Nature Materials, researchers from Seoul National University, Aachen University and Northwestern University reported they had engineered a material called tin selenide with the highest thermoelectric performance to date, nearly twice that of 20 years ago. But it took them nearly a decade to optimize it.


To speed up the discovery process, my colleagues and I have used quantum calculations to search for new thermoelectric candidates with high efficiencies. We searched a database containing thousands of materials to look for those that would have high electronic qualities and low levels of heat conduction, based on their chemical and physical properties. These insights helped us find the best materials to synthesize and test, and calculate their thermoelectric efficiency.


We are almost at the point where thermoelectric materials can be widely applied, but first we need to develop much more efficient materials. With so many possibilities and variables, finding the way forward is like searching for a tiny needle in an enormous haystack.


Just as a metal detector can zero in on a needle in a haystack, quantum computations can accelerate the discovery of efficient thermoelectric materials. Such calculations can accurately predict electron and heat conduction (including the Seebeck effect) for thousands of materials and unveil the previously hidden and highly complex interactions between those properties, which can influence a material’s efficiency.
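
Schematically (and only schematically; this is not the authors’ actual pipeline), that screening step boils down to filtering a table of predicted properties and ranking the survivors by the figure of merit, as in the hypothetical sketch below. Every name and value in it is invented.

```python
# Schematic of the screening step described above (not the authors' actual pipeline).
# Each record stands in for quantum-mechanically predicted properties of one material:
# Seebeck coefficient S (V/K), electrical conductivity sigma (S/m) and thermal
# conductivity kappa (W/m/K). All names and values are invented for illustration.
candidates = [
    {"formula": "A2B", "S": 220e-6, "sigma": 8.0e4, "kappa": 1.2},
    {"formula": "CD3", "S": 150e-6, "sigma": 2.5e5, "kappa": 3.5},
    {"formula": "EF",  "S": 300e-6, "sigma": 1.0e4, "kappa": 0.8},
]

T = 600.0  # operating temperature in kelvin

def figure_of_merit(material, T):
    """Dimensionless ZT = S^2 * sigma * T / kappa."""
    return material["S"] ** 2 * material["sigma"] * T / material["kappa"]

# Keep only materials with low heat conduction and decent electronic quality,
# then rank the survivors by predicted ZT.
shortlist = [m for m in candidates if m["kappa"] < 2.0 and m["sigma"] > 5.0e3]
shortlist.sort(key=lambda m: figure_of_merit(m, T), reverse=True)

for m in shortlist:
    print(m["formula"], round(figure_of_merit(m, T), 2))
```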


Large-scale applications will require thermoelectric materials that are inexpensive, non-toxic and abundant. Lead and tellurium are found in today’s thermoelectric materials, but their cost and negative environmental impact make them good targets for replacement.


Quantum calculations can also be used to search for specific sets of materials using parameters such as scarcity, cost and efficiency. Although those calculations can reveal optimum thermoelectric materials, synthesizing the materials with the desired properties remains a challenge.


A multi-institutional effort involving government-run laboratories and universities in the United States, Canada and Europe has revealed more than 500 previously unexplored materials with high predicted thermoelectric efficiency. My colleagues and I are currently investigating the thermoelectric performance of those materials in experiments, and have already discovered new sources of high thermoelectric efficiency.


Those initial results strongly suggest that further quantum computations can pinpoint the most efficient combinations of materials to make clean energy from wasted heat and avert the catastrophe that looms over our planet.

This article by Jan-Hendrik Pöhls, McCall MacBain Postdoctoral Fellow, Department of Chemistry and Chemical Biology, McMaster University, is republished from The Conversation under a Creative Commons license. Read the original article.

Friday, February 11, 2022

Metaverse Dubai creates virtual replicas of city’s iconic areas using real-world maps

Online services have transformed human interaction in professional as well as personal realms, with social media taking over communication, e-commerce doubling up as a way to secure essentials, and fintech offering a medium for everything from banking to investments. As digital platforms surround us in real life, the pandemic has opened doors to a separate online realm where video conferencing and data-sharing tools have replaced in-person meetings, letting people attend seminars and virtual events and communicate with coworkers.

At the same time, virtual reality has transformed entertainment by creating a separate ecosystem for gaming, which is now being used to build immersive worlds that people can enter by wearing headsets and by creating digital avatars of themselves to meet others in the online sphere. Following the launch of a virtual reality gaming experience at a mall in Dubai last year, the city has now been replicated as a virtual world that uses real-world maps to bring iconic locations to life and set the stage for diverse tasks.

Once users enter the digital twin of the Middle Eastern megacity, they can trade crypto tokens or even deal in digital real estate by taking a closer look at properties in Dubai, thanks to high-quality visuals that capture the aesthetics and geometry of each location with precision. People can enter the metaverse in two different sessions and collect 1000 hexes at a time, with each of them costing 3000 MVP coins in the virtual realm.

The introduction of this online world follows the launch of virtual art galleries where people can check out digital artworks and attend concerts remotely as part of the new normal. Over the past few months, medtech has also enabled surgeons in different parts of the world to collaborate on procedures, using virtual reality to share their know-how in real time.
