Images in Eye Reflections

In Japan, a cyberstalker located his victim by enhancing the reflections in her eye and using that information to establish her location.

Reminds me of the image enhancement scene in Blade Runner. That was science fiction, but now image resolution is so good that we have to worry about it.

Posted on July 27, 2020 at 9:46 AM • 42 Comments

Comments

Frankly • July 27, 2020 10:03 AM

I don't think we should accept this stalker's claims at face value. He may have had other information as most young persons today have a broad digital footprint. I'm particularly skeptical about determining the floor by the angle of the sun.

MikeA • July 27, 2020 10:17 AM

@Frankly:

You mean the FBI and DEA are not the only users of "Parallel Construction"?

stine • July 27, 2020 11:02 AM

The latest is that companies are scanning Facebook, etc. looking for pictures that contain the pads of fingers so they can extract the fingerprints from the photos. I'd be surprised if they aren't scanning the doors of cars in parking lots.

DP • July 27, 2020 11:13 AM

@Frankly - I don't know, it's hard to overestimate the time and persistence of the obsessed internet. Makes me think of the Shia LaBeouf vs 4chan game of capture the flag. I would question what social media would have a high-res enough image. The images my daughters take using the Snap app are awful.

1&1~=Umm • July 27, 2020 11:18 AM

@Frankly:

" I'm particularly skeptical about determining the floor by the angle of the sun."

Do you know what a "bubble sextant" is?

Or how a more traditional sextant works?

How about a variation called a theodolite?

https://en.wikipedia.org/wiki/Theodolite

In these instruments there is a mirror or similar that allows the very precise measurement of the angular difference between objects. It just so happens that the human eye also acts like that mirror.

When you have a minimum of three known references and the angles between them, you can use a simple protractor to move the angles until the points align. That gives you a position.

Most sailboat skippers who sail offshore --out of sight of land-- know how to do this, so that when they come back towards shore they can get a position fix, avoid hazards, and know what course to set to get to their chosen port etc.
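For anyone curious, the position fix described above can be sketched numerically as a least-squares "resection". This is only a rough illustration, assuming flat local coordinates and compass bearings from the observer to three identified landmarks; all coordinates below are made up:

```python
import numpy as np

def fix_position(landmarks, bearings_deg):
    """Least-squares position fix from compass bearings to known landmarks.

    landmarks    -- list of (x, y) coordinates of identified reference points
    bearings_deg -- bearing from the observer to each landmark,
                    in degrees clockwise from north (+y axis)
    """
    rows, rhs = [], []
    for (lx, ly), b in zip(landmarks, bearings_deg):
        t = np.radians(b)
        # The observer lies on the line through (lx, ly) with direction
        # (sin t, cos t); setting the 2-D cross product with that direction
        # to zero gives one linear equation per sighting.
        rows.append([np.cos(t), -np.sin(t)])
        rhs.append(lx * np.cos(t) - ly * np.sin(t))
    (x, y), *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return x, y

# Observer at (3, 4): landmarks sighted due north, east and south.
print(fix_position([(3, 14), (13, 4), (3, -6)], [0, 90, 180]))  # ~ (3.0, 4.0)
```

With more than two sightings the system is overdetermined, which is exactly why navigators take three: the small "cocked hat" triangle of the extra sighting gives a sanity check on the fix.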

Astronomers, for instance, given a photograph of the stars taken at night and the time it was taken, can give you a position fix down to a very small area.

Oh, and building surveyors used to be taught to use star sights over a period of days to fix not just a position to within a foot but also height to similar accuracy. These days they use digital systems to get greater accuracy on the angles for position and height, because buildings have to be built to "approved plans". Get it wrong and you can be told to knock the building down, which can be a tad expensive, so not a mistake you want to make.

The use of digital systems to enhance angles can be easily done on a home computer if you know what you are doing.

Whilst I'm not saying the attacker isn't telling a pack of lies, the simple fact is there are plenty of people out there who know how to do it.

Oh, and do not forget people in the military who know how to calculate "firing solutions". Even during WWI, naval gunners could work out how to hit an object about 30ft by 30ft from 90,000ft away using just optical instruments and either lookup tables or a mechanical computer that served the same purpose.

c1ue • July 27, 2020 11:54 AM

Honestly, this seems like nonsense.
All the major platforms de-rez posted pictures for storage efficiency purposes.
The likelihood that the de-rez would destroy 12+ megapixel detail would seem to be nearly 100%.
It is possible that an extreme closeup could have feature sizes large enough to be preserved - or more likely a very unique structure was reflected.
Nonetheless, the idea that social media pics can be analyzed: very unlikely.
Given: the platforms themselves have access to the original pictures so would be less affected. But even then: outside the top 10%/Apple fans, I wonder just how many people have the newest hardware and are uploading the biggest pics. Huge pics also require significantly longer upload times - such that it wouldn't surprise me if the apps that upload are doing the de-rez first...

echo • July 27, 2020 12:55 PM

The human eye is a bit dodgy and perception can be altered by habit and mental blind spots. Taking a picture of a room is a very good way to discover if it is as clean as you think it is and how much clutter you have. Give it a try and discover how up to our necks we are in filth and gape at how we're not tripping over garbage every few steps. Keyboards are the worst for this. They're positively crawling. I don't even want to think what putting a carpet under a microscope will show.

SpaceLifeForm • July 27, 2020 2:08 PM

Reminds me of the pic that Trump took of the wall mounted monitor in the SCIF.

Totally illegal, but that's another story.

You could see, due to the flash reflected back from the monitor, that there were 3 others next to him.

I could not ID them however.

Catherine • July 27, 2020 3:48 PM

It's important for social media networks to automatically strip EXIF data and downgrade the resolution of public photos.

Retaining EXIF info and high-resolution in images should be an opt-in choice, with an explicit warning about privacy risks.

Some networks do some of these things. It should be standard.
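The stripping step is simple to implement in an upload pipeline. Here is a minimal sketch using the Pillow library; the filenames and the 1280-pixel cap are illustrative assumptions, and a real pipeline would also need to handle XMP/IPTC metadata and colour profiles:

```python
from PIL import Image  # Pillow

MAX_DIM = 1280  # assumed resolution cap for publicly visible photos

def sanitize(src_path, dst_path):
    """Re-encode an image with no metadata and a capped resolution."""
    with Image.open(src_path) as im:
        im = im.convert("RGB")
        im.thumbnail((MAX_DIM, MAX_DIM))  # downscale in place, keeps aspect ratio
        # Copying only the pixel data into a fresh image drops EXIF,
        # GPS tags and any other embedded metadata.
        clean = Image.new("RGB", im.size)
        clean.putdata(list(im.getdata()))
        clean.save(dst_path, "JPEG", quality=85)
```

Making high resolution opt-in, as suggested above, would then just mean skipping the `thumbnail` call for users who explicitly accept the privacy risk.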

Dave • July 27, 2020 7:31 PM

"Zoom in. Enhance. Enhance. Pan right. Zoom in. Enhance. Zoom in. Zoom in. Enhance. Pan right. Zoom in. [Pause] Ah, it's a silicon atom!".

Yeah, it sounds like parallel construction for some far less exotic technique.

Clive Robinson • July 27, 2020 8:28 PM

@ Bruce,

It appears the story has suffered the Chinese-whispers effect as it got copied from one media outlet to another. After a little hunting around, the story appears to be the following:

Apparently the 26/27-year-old unemployed attacker saw a "train station" not a "bus station" in 21-year-old J-pop idol Ena Matsuoka's eyes and then used Google's service to search through train stations until he found the one that he thought matched.

It appears he then went there and recognised her window coverings at a window from another (I'm guessing taken inside) selfie. Not that the floor is really relevant.

Because he apparently waited outside the building, followed her into the building lobby, and waited for the door to close behind them before grabbing her, first stifling her then tying her up with a scarf, which caused bruising to her face.

What is not clear however is if the photo was up on a social network site, or on another site and linked to from a social network site...

There are pictures of Ena Matsuoka up on various web sites, and she appears to have "liquid eyes" so may wear contact lenses. Anyway, her eyes are certainly reflective; in a couple of pictures you can make out not only the phone the picture was taken with but, in one case, three people visible who would have been standing in front of her when it was taken.

So it looks like there is a possibility the story has a germ of truth in it.

echo • July 27, 2020 10:02 PM

Pre-processing image data and location-identification methods seemed so obvious I didn't bother commenting. In fact it felt so obvious, and given I've been yapping nonstop, I thought I'd give it a rest.

As for Chinese whispers, yes it is a thing and in some cases institutionalised. It's especially bad in organisations with hierarchies and wildly differing authority between "senior" and "junior" staff, time pressures, people never writing up notes properly or reading them, internal echo chambers, and failure to double-check with authoritative texts or outside "primary" data sources. This subject area is familiar to sociology professors, and organisational schisms are familiar to police. I know because they told me.

One of the UK's more pompous QCs accused me of "talking nonsense" when I brought the subject up. He was already in a mood after I dismantled judges' rules as "flummery", and both concealing and revealing information as an artifact of slightly badly designed rules, even when a university lecturer in technology systems backed me up. Yes, it's all quietly logged for another day.

I think a lot of decisions, and the badly implemented systems which flow from them, happen because the priorities of the staff are always considered first, whether it's the latest new wheeze or an established comfort zone. Ship first and patch later. The problem for some cases is that there is not always a "later".

Winter • July 28, 2020 12:53 AM

"Kassem G already covered this"

The miracle of multiplication of pixels.

I once saw a French movie where they reconstructed the address on an envelope from the reflection in a watch at some distance.

But this is even more miraculous.

Weather • July 28, 2020 3:56 AM

It was bull the first time you posted it ages ago, and not much has changed; maybe linking up multiple cameras might be able to do something.

Curious • July 28, 2020 10:20 AM

I can't help but wonder how easy it might be to try to frame somebody for a crime this way by doctoring the images. And even if you can't successfully frame somebody, perhaps you could start a fake investigation/surveillance by falsely implicating somebody in a crime or whatever.

Singular Nodals • July 28, 2020 11:21 AM

It works for me, that is, when I'm wearing mirror shades.

Some smart person could do a quick calculation using intervals of values for eye reflectance, camera distance, camera resolution, strength of ambient light etc., to see what if any conditions allow this to work.

And the question affords the opportunity for many happy hours of personal experimentation.

Post your results on social media !

Weather • July 28, 2020 2:17 PM

I'm guessing it was a close-up photo of the face; guessing one eye would be 1000×1000 pixels with a million-colour palette times 1.5, would that be enough data to image-process it?
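Weather's question can be given a rough back-of-envelope answer. The sketch below treats the cornea as a convex mirror of radius ~7.8 mm (a textbook figure), and the 83 px/mm scale assumes an iris of ~12 mm spanning ~1000 px in the photo; all numbers are illustrative, not measurements from the actual case:

```python
def reflection_pixels(obj_mm, dist_mm, px_per_mm=83.0, cornea_r_mm=7.8):
    """Back-of-envelope: size in pixels of a scene feature reflected in the
    cornea, modelled as a convex mirror with focal length f = R/2."""
    f = cornea_r_mm / 2.0
    image_mm = obj_mm * f / (dist_mm + f)  # virtual-image magnification
    return image_mm * px_per_mm

# A 3 m-wide storefront 10 m away, with the eye spanning ~1000 px:
print(round(reflection_pixels(3000, 10000)))  # → 97
```

So under these (generous) assumptions, a large nearby structure could span on the order of a hundred pixels in the corneal reflection: tiny, but plausibly enough to recognise a distinctive station front, which is consistent with what the story claims.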

echo • July 28, 2020 2:42 PM

This is like a modern spin on the now debunked Victorian forensics of optography. Apparently, it was used as a plot device in fiction at the time.

vas pup • July 28, 2020 3:27 PM

Q1: If contact lenses are present, does that make this closer to fact?

Q2: Is it possible to replicate the palm of a big shot (e.g. President, general, etc.) from the same media pictures by enhancing them, get access via a palm reader, and moreover get some information about the person's health condition out of the palm lines?

Steve • July 28, 2020 6:33 PM

Disregarding how factual the story is, I'm sure it can be done given enough pictures of your target. And here we are talking about a human being with the right motivation / wrong obsession. It would get easier with machine learning, AI, you name it.

Curious • July 29, 2020 6:53 AM

I wonder if there will come a time when you could have contact lenses that work as a video recording device, basically recording what you look at. Maybe even storing the recorded video on the contact lens itself. :) How it could even be possible to make a thin contact lens act as a camera I have no idea, maybe it isn't something that is deemed possible, but perhaps it is all down to performance and specs.

echo • July 29, 2020 7:14 AM

If I recall, in Gerry Anderson's Doppelgänger (1969) (a.k.a. Journey to the Far Side of the Sun) one character had a camera inside an artificial eyeball.

Clive Robinson • July 29, 2020 12:06 PM

@ Curious,

How it could even be possible to make a thin contact lens act as a camera I have no idea, maybe it isn't something that is deemed possible...

The first question you should always ask yourself is,

    Do the laws of physics allow it?

Well you can sit down and work it out.

If you assume the light comes in in parallel then a convex lens will bring it at some point to a place of focus. This we learnt, depending on our schools, some time from the age of six or so, with nice diagrams, along with other optics that make pretty rainbows that every child is supposed to be mesmerized by at some point.

Now a little later in your education you get taught about 100% internal reflection, which causes the "red eye effect" in photos where the flash is too close to the camera lens.

Surprisingly this tells you two important things... The first is that the retina is actually not at the point of focus but slightly away from it and curves to match the focus lines by being equidistant from the back of the lens.

To cut a long bit of maths short, you can show that the back side of the contact lens will be, like the retina, curved equidistant from the focus point.

Therefore the image will be reflected from the retina back onto the inside face in a way where it can be treated as though the image will be in focus on it.

Obviously, fun as this is, it kind of serves no practical purpose because the light will just go back through the lens and return whence it originated, minus some losses...

Well, not quite. Some materials are transparent to visible light but not to other wavelengths. Thus you could, as is currently done, put a thin metallic coating on the back of the lens, like gold etc. This lets visible light through but not other wavelengths, which is why it's used on building glass for high-efficiency buildings.

Now you might remember the "greenhouse effect": visible light comes through the atmosphere and hits the earth's surface, where it gets frequency translated and reflected back towards the sun. However... its frequency range now will not pass through the upper atmosphere and instead gets reflected back to earth, where it causes further heating effects, and so on. The result is the heat gets trapped, just as it does by the glass in a greenhouse...

Take a moment to think about that.

Now imagine a semiconductor that is transparent to visible light but not to some other non-visible spectrum, where it actually responds photovoltaically. Such semiconductors do exist, even though their photovoltaic efficiency is not particularly good.

Thus if you were to lay down the semiconductor on top of the gold or equivalent metal, then put an upper layer of say tin dioxide on top, you would get a very poor-response semiconductor camera element.

So yes, the physics do allow you to make such a sensor; all you have to do is find the right "magic elements" and some way to get the power in and the information out... But they are in effect just implementation issues.

So hopefully that answers your question.

echo • July 29, 2020 12:46 PM

@Clive @Curious

So yes, the physics do allow you to make such a sensor; all you have to do is find the right "magic elements" and some way to get the power in and the information out... But they are in effect just implementation issues.

Langmuir–Blodgett thin films are funky. Apparently, quantum dots can be used as camera sensors too. I'm just handwaving. There are a lot of technical issues to be overcome even with a normal-sized camera.

https://spectrum.ieee.org/consumer-electronics/audiovideo/move-over-cmos-here-come-snapshots-by-quantum-dots

And quantum dots have another advantage. Because they absorb light better than silicon, it takes only a thin layer atop the readout circuitry to gather almost all of the incoming photons, meaning the absorbing layer doesn’t need to be nearly as thick as in standard CMOS image sensors. As a bonus, this thin, highly absorbing layer of QDs excels in both low light and high brightness, giving the sensor a better dynamic range.

...Quantum-dot–based cameras have huge potential to bring infrared photography mainstream, because their tunability extends into infrared wavelengths.

[...]

Eliminating hybridization means that the resolution—the pixel size—can be less than the 15 µm or so needed to accommodate indium bumps, allowing for more pixels in a smaller area. A smaller sensor means smaller optics—and new shapes and sizes of infrared cameras at a far lower cost.

MarkH • July 29, 2020 2:45 PM

@Curious, echo, Clive:

I regret to say that the account offered by Mr Robinson of optical imaging within the eye is mistaken on each point.

However, an answer to the question posed by Curious lies in a radio frequency technology which Clive surely knows better than any of us: phased array radar, which can form high-resolution pictures using flat panels.

An optical counterpart to phased-array radar, which is not identical but relies on similar concepts, is the light field camera -- essentially, a planar array of very tiny cameras.

I was startled a few years ago to read about a very expensive camera which enables change of focus after the picture is taken. A model of this on the market has a "smart" focus mode in which it can render everything in the image -- whether close to the primary lens or far away -- in sharp focus.

In principle, then, any surface might be turned into a camera capable of recording images. The advantage and disadvantage of doing so with contact lenses is that this camera would track eye movements, which both reflect the person's attention, and tend to be very abrupt and jumpy.

An eyeglass lens, or a hat -- something following head movement rather than eye movement -- might be more convenient.

I much doubt whether anybody could fit (let alone conceal) such a system in a contact lens, or even an eyeglass lens today. But in a few decades? Probably it will happen.

echo • July 29, 2020 4:54 PM

@MarkH

Lightfield cameras are basically why I was suggesting thin film quantum dots. I don't know enough about the technology limits or the maths behind it and am too lazy to go further.

Meta materials may be able to work past diffraction limits too. As for working out whether quantum dots and metamaterials can work together see laziness etcetera.

Clive Robinson • July 29, 2020 10:39 PM

@ MarkH,

I regret to say that the account offered by Mr Robinson of optical imaging within the eye is mistaken on each point.

To make such an empty statement is unwise, especially when people can use a pencil and graph paper to show otherwise.

JonKnowsNothing • July 30, 2020 1:56 AM

@Curious @Clive

re: contact lenses that work as a video recording device

iirc(badly)

There is research about using the natural eye and processing images directly in the brain for some forms of blindness or partial blindness.

There have been several attempts with mixed results in how to capture images and get them to undamaged parts of the vision system.

Generically, they embed a form of light-dark capture system inside the eye and a second processing system attached to the brain areas, either by cable implant or by an implant with a control piece attached to the exterior of the skull.

The specific methods vary but the concept is to capture the image and transmit it directly onto the visual cortex or onto the optic nerve, the implants being set just past the damaged areas allowing the non-damaged areas to continue functioning.

In some trials a previously sighted person was able to discern light/dark and distinguish some shapes or blobs.

In more recent studies greater and finer details were perceived, again with previously sighted persons.

For those who either have no sight at birth or lost their sight while very young, the mental imagery that is common to sighted persons is different. Gaining sight at a later age is not always enough to re-set the brain into ways of seeing common objects.

The research is very interesting not just because of the technical achievements but the entire "what is sight and what is it we are seeing"(1) aspects of brain development.

It's not a contact lens though.


1. Buddhist doctrines spend a lot of time discussing perception. One of the aspects is that even though we collectively agree that A == Apple, there is no way to know exactly how another person "sees" the apple. We mutually agree what we are seeing is an apple but is it? Schrödinger's cat several centuries earlier.

echo • July 30, 2020 2:50 AM

@JonKnowsNothing

The brain isn't always as cleanly organised as people think. For instance, with some people mathematical reasoning is carried out in part by the visual cortex. "Brain plasticity" is another topic and, like you say, has its limits.

Buddhism is a pretty simple framework, differing with the culture it is embedded in. I think you're referring to Zen Buddhism and it's really more about "words can convey meaning but they cannot confer understanding". Much like Samurai doctrine there are two major competing schools and lots of subdivisions even within Zen. Then there is "learning by observation". There are academic papers on this. At some level the brain mimics and attempts to harmonise with other people. The more senses are involved, the better lessons are remembered. There is also the circular nature of learning, where the master and student swap places. Not all of it translates especially well to Anglo-Saxon cultures. Then there is Chinese Buddhism, or Chan, which Zen is actually ripped off from. There are also Daoist influences in there. China will tend to lean more towards Confucian influences, Japan towards Shinto. It's a big topic and worth taking a round trip through Daoist thinking and animistic religions too.

In contrast, I can assure anyone that the rote-learned silo mentality of the UK state sector pretty much kills stone dead any joy you have in life. Backward thinking, closed minds, and comfort zones riddle the state sector. This isn't just talk. I have the reports logged in my evidence database.

Speaking of which, Zatoichi, a blind swordsman, is a long-running and very popular Japanese franchise, including a series of films and a television series. There is also the spin-off movie "Ichi", in which a blind female musician who is rescued (and later trained) by Zatoichi travels through Japan to find her mentor (source: Wikipedia).

There have been some experimental game engines using sound as a location device. There is also a blind gamer who plays Call of Duty.

https://www.engadget.com/2018-08-02-blind-call-of-duty-player-thousands-of-kills.html

So how does one play video games while completely blind, especially a frantic shooter like CoD: WWII? Mostly by hearing footsteps. TJ tracks enemy players by sound alone, using surround-sound headphones, dialing down the background music and choosing in-game perks that enhance audio feedback. He navigates maps similar to how a bat or dolphin might, but instead of sending out high-frequency pings to understand the environment, TJ shoots ahead and listens (items, walls and the ground all have different sounds when shot).

Then there are people who have Hyperthymesia and remember every single day of their lives. This raises the question of "What is a camera?"

People who remember every second of their life | 60 Minutes Australia
https://www.youtube.com/watch?v=hpTCZ-hO6iI

Imagine being able to remember every minute detail of your life. You can recall what the weather was like, what you were reading or what you wore to the shops at any minute, any hour or any day stretching back decades. It sounds like some kind of parlour trick, but it's actually a real and very rare medical phenomenon.

MarkH • July 30, 2020 6:09 AM

@Clive et al:

An analysis, if you're interested:

If you assume the light comes in in parallel then a convex lens will bring it at some point to a place of focus

Yes: in general, convex lenses can converge parallel light rays to some approximation of focus.

100% internal reflection ... causes the "red eye effect" in photos where the flash is too close to the camera lens

No: although the specifics depend on the substances through which the light is passing, total internal reflection generally occurs for light rays nearly parallel to the surface (or at least, at not very steep angles).

The red-eye effect usually occurs when the person or animal in the flash photo is looking toward the camera; in this case, the light rays reach the fundus (rear structures of the eye) at roughly a right angle. In such case, total internal reflection is impossible.

In those red-eye photos, the fundus is reflecting like an ordinary mirror used at home, on automobiles, etc. -- except that it's a very poor mirror. The light is strongly attenuated in intensity, with shorter wavelengths (bluer colors) attenuated even more strongly.

the retina is actually not at the point of focus but slightly away from it ...

When an eye is focused on (or near) a particular object, rays from that object must come to a focus very nearly at the retinal layer containing the rods and cones (at least, in the macular region which is nearly on-axis, and enables the perception of detail). If this condition were not met, then the object (necessarily) could not be seen clearly by the observer.

[the retina] curves to match the focus lines by being equidistant from the back of the lens

Yes, as a rough approximation ... but no, probably not accurately, and the significance of this curvature is perhaps misconstrued.

In general, optical systems producing a real image have curved focal surfaces (see "Petzval surface" for more about this). To flatten the focal surface for photography is a tough design problem, particularly vexatious in the days before electronic computers.

A "camera" with a very wide field of view (like the human eye) can be vastly simpler and image better if the focal surface has plenty of curvature.

As far as I've been able to look up, the actual radius of curvature at the macula is perhaps 30% shorter than would be needed to keep its various zones equidistant from the eye's lens. If I got that curvature wrong, I welcome correction!

the back side of the contact lens will be ... curved equidistant from the focus point

No: the posterior face of a contact lens must closely match the curvature of the cornea. The anterior radius of curvature for a typical human cornea is about 8 mm, whereas equal distance from the focal point (i.e., the retina) would require a radius greater than 20 mm ... not even close.

the image will be reflected from the retina back onto the inside face in a way where it can be treated as though the image will be in focus on it

Most decidedly not! It's pretty straightforward, using that pencil and graph paper, to illustrate that rays reflected back from the fundus will have an angle of divergence matching the angle of convergence they had as they approached the retina.

Therefore, if those rays were very nearly parallel as they approached the cornea, so will they also be very nearly parallel as they leave the cornea as reflected light. An eye looking at an object forms no image of that object anywhere near the cornea.

I suggest that this situation is analogous to the "reciprocity" property of radio antennas, a matter on which Clive will be far more versed than I.

In fact, a variety of optical tests and calibrations rely on this property of an ideal optical imaging system: if you place a plane mirror at the focal point, perpendicular to the optical axis, then a wavefront reflected back will have the same form as the impinging wavefront which gave rise to the reflection.
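The mirror-at-the-focal-point property can be checked with ray-transfer (ABCD) matrices. This is a sketch under an idealised model: the eye as a single thin lens of focal length f, the retina as a plane mirror at the focal plane, and the reflected path "unfolded" so the mirror becomes a pass-through:

```python
import numpy as np

def lens(f):       return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])  # thin lens
def propagate(d):  return np.array([[1.0, d], [0.0, 1.0]])         # free space

f = 1.0  # arbitrary units; the conclusion is independent of f
# Unfolded round trip: lens -> travel f -> (plane mirror) -> travel f -> lens.
# A plane mirror is the identity in unfolded coordinates, so it drops out.
system = lens(f) @ propagate(f) @ propagate(f) @ lens(f)

ray_in = np.array([0.5, 0.0])   # parallel ray: height 0.5, angle 0
ray_out = system @ ray_in
print(ray_out)                  # height inverted, angle still 0: parallel out
```

The system matrix works out to [[-1, 2f], [0, -1]]: any ray entering with angle 0 leaves with angle 0. That is exactly the claim above: parallel light in, parallel light out, with no real image formed at the cornea.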

A homely demonstration that this applies accurately to eyes, is our friend the "red eye effect". The reflected light from the fundus is extremely directional: in many cases, an extra displacement of the camera lens from the flash of only a few centimeters, will greatly reduce the effect.

The light rays from the flash reach the cornea very nearly parallel ... and they reflect back very nearly parallel. A real image is created in or very near to the retina, and nowhere else in or on the eye.
___________________________

Disclaimers:

1. Though I have a life-long interest in the theory and practice of optical imaging systems, I'm no expert. I welcome factual corrections of any errors I made.

2. I don't recall a single instance when, before doing something, I paused to consider "is this wise?" As a man in my 60s, I certainly don't intend to change that now!

echo • July 30, 2020 7:02 AM

@MarkH

Arguing over optical imaging systems isn't my idea of fun. Another thing is that theoretical work versus real-world photography versus vision are overlapping but different topics. I'm personally more interested in the vision system and yapped about this before throwing the draft away, because I talk too much. I did post about the new gigapixel telescope; the YouTube video went into some nice detail about the design and the reasoning behind the design decisions of the optical imaging system. Not a peep about this... The thing is, are you trying to enhance learning or argue?

I'm not personally going to correct anyone who sees everything as an argument. If there's a bone of contention it's just better to write up your own alternative chalk and talk. It's more readable and sinks in better.

The vision system is quite complex and not fully understood. I could give a fairly boring trot through the high-resolution colour bit versus the low-resolution black-and-white movement-detection bit, and the preprocessing both in the retina and the optic nerve. Vision processing doesn't just happen in the visual cortex but involves other parts of the brain. Short version: we don't see, we perceive. That gets into massive discussions about the neuro-psycho-social stack and brain plasticity and illusions and social engineering and maths, including ratios and probabilities, and lots of handwavy stuff about heuristics and law for anyone who wants to push it. This is an area where you can both get into a lot of science and also tip into art.

It's slightly orthogonal to the topic of cameras but also a continuation to discuss "ghosting", the art of making yourself "invisible" to cameras. This can be via various disguise techniques to thwart recognition, or the use of props to block being recorded, or simply being in camera blind spots. There was something on a news aggregation site the other day but I lost the link.

Getting back on topic, I think it was "Who?" who posted a link on commissioning SCIFs. The idea of building a secure room, or securing a room so pictures can be safely taken without disclosing a location, or perhaps faking a location, is another topic of discussion. This may include but is not limited to modifying colour temperature and light direction, architecture, artifacts, colours, vegetation and tree coverage, and reflected scenes. Another angle on the problem is frequency of publication and variety. This, as you can imagine, begins to tip into broadcasting and marketing. Anti-harassment and anti-stalking strategies are a softer, still cultural and psychological and social issue.

There is also the issue that cameras can lie as much as we lie to ourselves which is another security topic. You won't find that in any books on optical imaging systems.

JonKnowsNothing • July 30, 2020 12:00 PM

@echo
re:

I think you're referring to Zen Buddhism and it's really more about "words can convey meaning but they cannot confer understanding".

Think of language syntax. Generally verbs for action, and descriptors for objects and attributes.

We use the same set up for computer-human language interfaces including pesky punctuation and arcane symbols.

Person - Object - Verb
(the order of presentation varies by language)

Using a "table" presents an easier way to grasp the idea.

A table is a construct that we all recognize and agree This is a Table. We agree it exists and that we can put our food and coffee on it or use it for holding computers or other items.

Table is made of legs, a top, a material.
e.g. Wood for legs, wood for top.

At what point does this become a "table", where IS the table?

The table is a constructed item that does not exist outside the form of the legs and the top but only in conjunction with them. So we give it a name: a table.

Except "table" does not exist. It can be constructed and destructed but it does not exist in and of itself.

What we agree to call a table may not be a table at all. So when you "see a table" what is it you are "seeing"?

As you point out, words alone do not provide understanding. Words may not be able to express an understanding. We are limited to words and what we collectively agree they mean, but that does not mean the understanding is accurate.

People who never learned to associate words with objects have a different understanding of what is around them. We can teach "Water" and get a response to "Water", but the internal understandings are different.

disclaimer: I am not a Buddhist and my understanding is about the level of table. ymmv

https://en.wikipedia.org/wiki/Helen_Keller

"I did not know that I was spelling a word or even that words existed
...
when she realized that the motions her teacher was making on the palm of her hand, while running cool water over her other hand, symbolized the idea of "water"
...
Writing in her autobiography, The Story of My Life, Keller recalled the moment: "I stood still, my whole attention fixed upon the motions of her fingers. Suddenly I felt a misty consciousness as of something forgotten — a thrill of returning thought; and somehow the mystery of language was revealed to me. I knew then that w-a-t-e-r meant the wonderful cool something that was flowing over my hand. The living word awakened my soul, gave it light, hope, set it free!"

lurkerJuly 30, 2020 12:42 PM

@echo

Arguing over optical imaging systems isn't my idea of fun.

Isaac Newton wrote the book on it, then held off publishing for many years because he was afraid his mates would laugh him out of the club.

echoJuly 30, 2020 5:21 PM

JonKnowsNothing

As you point out, words alone do not provide understanding. Words may not be able to express an understanding. We are limited to words and what we collectively agree they mean, but that does not mean the understanding is accurate.

People who never learned to associate words with objects have different understanding of what is around them. We can teach "Water" and get a response to "Water" but the internal understandings are different.

Yes, something like this, or close enough. There's a saying along the lines of "How can you describe the taste of chocolate to someone who has never tasted chocolate?" (variants of which aren't unlike your example). Then there is intuition and truth. I think beyond this point it gets handwavy, and I shy away from religions.

Arthur C. Clarke in Mysterious World gives three kinds of mysteries.

  1. Mysteries of the First Kind: "Something that was once utterly baffling, but is now completely understood".
  2. Mysteries of the Second Kind: "Something that is currently not fully understood and can be in the future".
  3. Mysteries of the Third Kind: "Something of which we have no understanding".

Of course, artists and designers may wish to elucidate truths in ways which words alone may not be able to convey, or confound us.

MarkHAugust 2, 2020 4:23 AM

@echo:

I knew a fellow who, when he heard on the radio that a legendary operatic soprano had died, rushed to the library of one of the world's most famous universities, found her name card in the card catalog, and wrote "1977" into the blank space to the right of the hyphen for her biographical dates.

Eccentric? Certainly!

Disturbing? To some, I'm sure.

Anyone's idea of fun? I can't imagine so ...

He was educated as a musical scholar, for whom the accuracy and completeness of records is paramount.

A notion which I believe that Clive and I have in common, is that there may be some few readers out there for whom these comment threads are a source of learning and perhaps a sort of reference.

To the extent that this is true, correcting the record may be an endeavor of worth.

I welcome factual corrections when I write in error. They are a gift contributing to my learning (as well as the learning of others).

When I see a factual claim which I believe might significantly mislead a person relying on it, I often write a correction.

I accept that that is arguing, in the same sense that scholarly journals contain arguing: people add corrections and revisions to the work of others, and hopefully knowledge is increased a little.

If you find my comments rebarbatively pedantic, then I respectfully request that you skip over them. I expect you would be in good company!

TedAugust 5, 2020 8:54 PM

Cinematographers and photographers have long used reflections in subjects' eyes to reverse-engineer the lighting of a shot they like the look of. Often you can see which lighting instruments are being used, their placement, and which light modifiers are in use.

Along the same line, and related to locating a place from photographs, apps like The Photographer's Ephemeris let you see the azimuth, height above the horizon, etc., of the sun on any given day at any given location.
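The geometry behind such apps can be sketched in a few lines. This is a rough approximation using the textbook declination and hour-angle formulas (good to within a degree or so), not how any particular app computes it:

```python
import math

def solar_elevation(latitude_deg, day_of_year, solar_hour):
    """Approximate solar elevation in degrees above the horizon.

    latitude_deg: observer latitude, north positive
    day_of_year:  1..365
    solar_hour:   local *solar* time, 0..24 (12 = solar noon)
    """
    # Approximate solar declination (degrees), from the day of year
    decl = -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))
    # Hour angle: the sun moves 15 degrees per hour from solar noon
    hour_angle = 15.0 * (solar_hour - 12.0)

    lat, dec, ha = map(math.radians, (latitude_deg, decl, hour_angle))
    sin_elev = (math.sin(lat) * math.sin(dec)
                + math.cos(lat) * math.cos(dec) * math.cos(ha))
    return math.degrees(math.asin(sin_elev))

# At 50 N at solar noon on the winter solstice (day ~355),
# the sun sits only about 16-17 degrees above the horizon.
print(round(solar_elevation(50.0, 355, 12.0), 1))
```

Run in reverse, this is exactly why a sun angle plus shadow lengths in a photo can narrow down a floor or a location: a given elevation at a known date and time is only consistent with certain latitudes and orientations.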

This is a great blog. Thank you Bruce for helping non-techies such as myself access these topics!!
