
__________
…[edit: so deceptively put forward].
Very important to understand this hidden side of history. It is enormous.
And its enormity shows how depraved the murderers were and still are, and how ruthlessly they have concealed the truth. ABN
Do your best. Speak the truth.
Teaching AI to see faces like humans reveals what makes expert eyes so effective, new research shows.
What is it that makes a super recogniser – someone with extraordinary face recognition abilities – better at remembering faces than the rest of us?
According to new research carried out by cognitive scientists at UNSW Sydney, it’s not how much of a face they can take in – it comes down to the quality of the information their eyes focus on.
“Super-recognisers don’t just look harder, they look smarter. They choose the most useful parts of a face to take in,” says Dr James Dunn, lead author of the research, published today in the journal Proceedings of the Royal Society B: Biological Sciences.
“They’re not actually seeing more; instead, their eyes naturally look at the parts of a face that carry the best clues for telling one person from another.”
This article is interesting but leaves out the fact that facial recognition takes place in a small part of the brain that works holistically with faces; that is, it is able to grasp an entire face as a whole.
People who are good at face recognition have good brains in this area. People who are bad at it have not-so-good brains in this area.
Interestingly, this area of the brain is close to our orthographic area, the area where written words and graphic signs are identified or produced.
And the two areas can borrow real estate from each other.
One result of this is that some people, when learning to read and write, can lose some of their face-recognition ability to make room for orthography.
Here are a couple of related articles focusing on face-blindness: Face-blindness (memory) test and Prosopagnosia — ‘face-blindness’ — described.
Facial recognition is interesting and plays a major role in our social and subjective sense of how we function.
Everybody is somewhere on the spectrum of good-to-bad facial recognition skills.
As the article above correctly states, you cannot train yourself to be better at face recognition (because it is a holistic skill ensconced in the architecture of the brain).
Many people with poor facial recognition skills are not aware of their deficit.
It’s a good idea to take one or two of the free online tests for prosopagnosia, the clinical word for face-blindness.
If you are good at it, you are probably pretty good socially.
If not, you may have a social deficit whose origin you were not aware of.
I took a couple of those tests some years ago and thought they were ridiculous because, I assumed, no one could possibly do them. Even after that, it took me a few more years to recognize that I really do suck at recognizing faces.
Parents and teachers should be aware that some of the children they are dealing with may be very intelligent but also very bad at face recognition.
Oliver Sacks had prosopagnosia and Brad Pitt says he does too, so the company is not so bad. ABN
A group of Lithuanian lawmakers has proposed banning family reunification for immigrant workers legally employed in the country, arguing that the policy change is necessary to prevent a large influx of nonworking migrants.
Ten members of the Seimas registered amendments on Wednesday to the Law on the Legal Status of Foreigners that would remove provisions allowing residence permits for family members of foreigners already living and working in Lithuania.
One of the bill’s authors, Vytautas Sinica of the far-right National Alliance party, said that without this change, Lithuania could see around 60,000 additional immigrants arrive starting next year through family reunification, placing a financial burden on taxpayers.
“By 2026, three years after the start of mass immigration, around 60,000 additional foreigners could come to Lithuania through family reunification. These would no longer be labour migrants but mostly nonworking persons,” Sinica said in a statement.
Full video:
Owens has done some of the best reporting and analysis of Kirk’s assassination. She has inside information, a wide range of crowd-sourced information, and a very personal animus driving her forward. She also happens to have exceptional rhetorical talent and is thus able to weave an ongoing narrative clearly, with gravitas and scathing humor. ABN
A fascinating Swedish study claims to show that:
…the sense of agency for speech has a strong inferential component, and that auditory feedback of one’s own voice acts as a pathway for semantic monitoring, potentially overriding other feedback loops.
The source of that quote can be found here: Speakers’ Acceptance of Real-Time Speech Exchange Indicates That We Use Auditory Feedback to Specify the Meaning of What We Say.
In an article about the study above—People Rely on What They Hear to Know What They’re Saying—lead author Andreas Lind says that he is aware that the conditions of their research did not allow for anything resembling real conversational dynamics and that he hopes to study “…situations that are more social and spontaneous — investigating, for example, how exchanged words might influence the way a… conversation develops.”
FIML partners will surely recognize that without the monitoring of their FIML practice many conversations would veer off into mutually discordant interpretations and that many of these veerings-off are due to nothing more than sloppy or ambiguous speech or listening.
If speakers have to listen to themselves to monitor what they are saying and still misspeak with surprising frequency, then instances of listeners mishearing must be even more frequent, since listeners normally have no way to check in real time what they are hearing or how they are interpreting it.
That is, listeners who do not do FIML. FIML practice is designed to correct mistakes of both speaking and listening in real time. FIML queries must be asked quickly because speakers can accurately remember what was in their mind when they spoke for only a short period, usually just a few seconds.
The Swedish study showed that in a great many cases words that speakers had not spoken “were experienced as self-produced.” That is, speakers can be fooled into thinking they said something they had not said. How much more, then, does our intention in speaking get lost in the rickety dynamics of real conversation?
This study is small, but I believe it shows what happens when we speak (and listen). Most of the time, even when we are being careful, we make a good many mistakes and base our interpretations of ourselves and others on those mistakes. I do not see another way to correct this very common problem except by doing FIML or something very much like it.
In the future, I hope brain-scan technology will be accurate enough to let us see how poorly our perceptions of what we are saying or hearing match reality and/or what others think we are saying or hearing.
It is amazing to me that human history has gone on for so many centuries with no one having offered a way to fix this problem, which leads to so many disasters.
Basic anthropology — tribal beliefs and behaviors migrate along with the tribes. Not difficult to discuss at all unless your tribe (Europeans) has been mind-controlled into not seeing the painfully obvious, which it has been. I highly respect the young speaker above, as saying what she is saying actually takes courage in Europe today. ABN
I have not watched this yet, but she is usually funny if you are in the mood. ABN