"One wrapper to rule them all."
(Two example photos appear here.)

"Mr. M. Frydman, an engineer, remarked on the subject of Grace, 'A salt doll diving into the sea will not be protected by a waterproof coat.' It was a very happy simile and was applauded as such. Maharshi added, 'The body is the waterproof coat.'" -- Talks With Sri Ramana Maharshi
Identity is the artificial flower on the compost heap of time. -- Louis Menand, "Listening to Bourbon"

What are you, at your core? Awareness is a key part, one that we share with many animals, and can feel when looking into a dog's eyes. I am working on creating that feeling of awareness with photos, so that you can look into the eye of the computer screen and feel something responding to you. To do this, Phobrain uses a sort of downloaded brain, in the form of neural nets trained on identified photo associations, and adds variation based on your mouse movement, click rhythms, and a DNA dynamics simulation to give it a (sort of) tin man's heart.

How does it work? You look at two photos, and see if you can figure out what they have in common. Then you see what that pair has in common with the next one, and what story might emerge, like the dreams of a dog scratching an electronic itch.

In the default Browse Mode, just click on one of the photos to search for the next pair. Clicking on the left photo generates the next pair using neural nets (~10M pairs), while clicking on the right photo chooses the next pair from among ~100K positively-classified pairs used to train the nets. Clicking in the grey area just above the photos chooses a pair of unseen photos at random. When you click on the left photo, your click timing acts on the choices generated by the nets as if you were throwing a stone into a pond, creating an individual experience. (Waving the mouse around or drawing on the photo also affects the choice.)
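The "stone in a pond" mechanism isn't spelled out here, but a minimal sketch can convey the idea: the interval between clicks perturbs which of the nets' top-ranked candidate pairs is shown next. Everything in this sketch (the `choose_pair` function, the window size, the millisecond modulo mapping) is an invented illustration, not Phobrain's actual code.

```python
# Hypothetical sketch: the gap between clicks, in milliseconds,
# perturbs which of the neural nets' top-ranked candidate pairs
# appears next. The window size and modulo mapping are invented
# for illustration only.
def choose_pair(ranked_pairs, click_interval_ms):
    window = min(20, len(ranked_pairs))  # consider only the top candidates
    index = int(click_interval_ms) % window
    return ranked_pairs[index]

pairs = [("a", "b"), ("c", "d"), ("e", "f")]
print(choose_pair(pairs, 1234))  # a 1234 ms gap selects index 1234 % 3 == 1
```

The point of such a scheme is that any candidate near the top of the ranking is a plausible next pair, so letting human timing break the tie individualizes the experience without degrading the match quality.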

Screens. The screens show either portrait-oriented or landscape-oriented pairs. The landscape pairs can be either side-by-side or stacked. Side-by-side landscapes are recommended if a wide screen is available.

Search Mode. In this mode, unscreened pairs are formed dynamically in response to your choices and timing. Several options for choosing the next pair appear below the photos, some depending on whether AI or Golden Angle is chosen:

The yellow + gives color similarity (using one of 10 algorithms); - gives a color opposite; c chooses curated pairs. The purple options, Σ1 Σ2 Σ3 Σ4 Σ5 ΣΣ Σ𝓍, apply various combinations of neural networks. The grey numbered options, 2 3 8 27 32K, apply the Golden Angle in spaces of the indicated dimensions. | chooses a pair completely at random. The green + chooses a match based on descriptions using keywords. In Search Mode, clicking on a photo matches its neighbor (rather than replacing the pair): clicking on a corner invokes one of 4 color-matching algorithms, while clicking on the center invokes keyword matching.

Clicking in the grey area next to the options (to the left of the yellow + or to the right of the green +) causes any keywords shared by the photos, or the color-matching algorithm used to choose them, to appear to the right of the options for as long as the mouse button is held down. Clicking in the grey area next to a photo toggles it with the one that was there before. Clicking below the photos, in the grey area just above the options, toggles both photos with the non-showing pair. Holding this area down for a second restores the most recent pair.

The keywords used by the + option allow some storytelling, like a psychic crossword, or a therapist analyzing dreams. The common features could be people, things, colors, shapes, textures: whatever we think would make the most obvious and interesting connections as you hold both pictures in your mind. (Again, you can see the keywords that matched by clicking and holding the mouse next to the options.) The keywords for the features are scripted and discussed like characters in Sesame Street, merging points of view to create a hybrid 'brain'. Each photo is like a brain cell, connected to other photos by keywords they have in common. You explore this 'brain' when you use the + option.

Don't worry if you don't see any similarities at first — it's not perfect — but if you keep at it, you will start to see themes that last over a few pictures, then more will start making sense — it's like learning a language you find you already know. You can use another option to change the subject when you get tired of a theme, then return to + if you see something you want to pursue.

Looking at the photo on the left above, we might describe it with the words "woman hand phone face blue". If I click on + for it, I expect to see another picture with at least one of these features, but will blue-ness jump out for me on the next photo? As you go from picture to picture, it is a little like a crossword puzzle, matching up words instead of letters.

Now consider the photo on the right above: it is outdoors not indoors, in public not in private, the background has classic geometry, the real person in it is a boy and not a woman, and the color that jumps out is red instead of blue. On the other hand, there are two males in each picture, and there are representations of people (picture on phone, and statue). Perhaps the most interesting similarity between the two photos is that there is an interaction between a person (or people) and a representation of a person in each. This site can help build up your analytical abilities, although it does not do such a complicated analysis itself, and would be unlikely (we hope) to join these two pictures when the + option is used.

As when learning a language, you can enjoy the view and watch for patterns to emerge.

Sessions: Each browser creates its own session, which should keep you from seeing any repeats of pictures within a given View.

The dog's eyes: My goal is to make the site smart enough so that it seems alive, like the feeling you get when looking into a dog's eyes. The fading image when you enter the slideshow is a gesture toward that goal. More concretely, a live molecular dynamics simulation is used as a sort of heart: it is affected by clicks on the slideshow page, and in turn affects the next picture you see; and it gives a continuing life to the site.

Can you make it browsable? I don't plan to add the kind of browsability that other excellent sites have.

Can we upload pictures? I plan to add the ability to upload photos to the site.

Theory: (This describes the original, single-photo version.) A picture can tell a story that stands on its own and burns itself into your memory. Put two pictures together in sequence, and the 'picture' now exists in your memory as much as in your eye. The story becomes what is common to the pictures, and this competes for your attention with the other details. You may struggle to find a story and give up. My theory is that if you can find a story more often, you will become more engaged. According to a New York Times blog:

Japanese researchers found that dogs who trained a long gaze on their owners had elevated levels of oxytocin, a hormone produced in the brain that is associated with nurturing and attachment, similar to the feel-good feedback that bolsters bonding between parent and child. After receiving those long gazes, the owners' levels of oxytocin increased, too.

A more nuanced story about oxytocin from Wikipedia.

Related media and software

  • The No Words Forum threads photos on themes like Phobrain, but without a dynamic personality responding in the moment. Very interesting for the variety of viewpoints.
  • A deep-learning tool hybridizes pairs of pictures, creating novel effects analogous to combining Phobrain pairs in your mind.
  • Google Images allows you to search with words or pictures, and in principle Phobrain could use it for raw associations for its personality to select from (similarly for photo stock agency collections).
  • New deep learning image retrieval methods like Google's could be retrained with Phobrain principles, rather than simply used to feed Phobrain.



  • P. Kainz, M. Mayrhofer-Reinhartshuber, and H. Ahammer. IQM: An extensible and portable open source application for image and signal analysis in Java. PLoS ONE, 10(1):e0116329, Jan. 2015.
  • H. Ahammer, N. Sabathiel, and M.A. Reiss. Is a two-dimensional generalization of the Higuchi algorithm really necessary? Chaos 25, 073104 (2015). doi:10.1063/1.4923030
  • BoofCV, Peter Abeles, 2012. An open source Java library for real-time computer vision and robotics applications.
  • Web 3DNA for DNA model building
  • AMBER: Assisted Model Building with Energy Refinement, D.A. Case, R.M. Betz, W. Botello-Smith, D.S. Cerutti, T.E. Cheatham, III, T.A. Darden, R.E. Duke, T.J. Giese, H. Gohlke, A.W. Goetz, N. Homeyer, S. Izadi, P. Janowski, J. Kaus, A. Kovalenko, T.S. Lee, S. LeGrand, P. Li, C. Lin, T. Luchko, R. Luo, B. Madej, D. Mermelstein, K.M. Merz, G. Monard, H. Nguyen, H.T. Nguyen, I. Omelyan, A. Onufriev, D.R. Roe, A. Roitberg, C. Sagui, C.L. Simmerling, J. Swails, R.C. Walker, J. Wang, R.M. Wolf, X. Wu, L. Xiao, and P.A. Kollman (2016), AMBER 2016, University of California, San Francisco.
  • ParmBSC1 DNA force field: A. Pérez, I. Marchán, D. Svozil, J. Sponer, T.E. Cheatham III, C.A. Laughton, and M. Orozco. Refinement of the AMBER force field for nucleic acids: improving the description of alpha/gamma conformers. Biophys. J. (2007) 92(11), 3817-29.
  • NGL, a WebGL protein viewer. NGL Viewer: a web application for molecular visualization, Oxford Journals, 2015.
  • Aude Oliva and Antonio Torralba. Modeling the shape of the scene: a holistic representation of the spatial envelope. International Journal of Computer Vision, Vol. 42(3): 145-175, 2001.
  • Jonathon S. Hare, Sina Samangooei, and David P. Dupplaw. 2011. OpenIMAJ and ImageTerrier: Java libraries and tools for scalable multimedia analysis and indexing of images. In Proceedings of the 19th ACM international conference on Multimedia (MM '11). ACM, New York, NY, USA, 691-694. DOI: 10.1145/2072298.2072421


  • Ivan Karp, owner of the OK Harris gallery, once told me, "What you have here is fine art photography." His memorial, with great remembrances, puts that in context.

Articles and threads



Site History

  • 6/2018 Phobrain offline pending solving growth issue.
  • 3/2018 Phobrain's 'story' now branches into two plots, which can run in parallel, cross sides, and rejoin.
  • 2/2018 Simpler neural net models yield better effective accuracy, as high as 97%, vs. about 60% for the Siamese nets and about 20% for the other options. Neural nets are now used in Browse Mode (default) when clicking on the left-hand picture, while only training pairs are shown when clicking on the right-hand photo. About 250 networks are used.
  • 11/2017 Siamese neural nets: now 40; added keyword vectors to histogram models.
  • 10/2017 Added 10 Siamese neural net models using color histograms in Browse and Search (AI) Modes. Added 700 of Bill's photos.
  • 8/2017 More complex personalities for Browse left/right options. Added 600 of Bill's photos.
  • 6/2017 Added 'Let pics repeat' option. Bifurcated Browse Mode into keyword-based choice of next pair, vs. mixed color/keyword-based choices.
  • 5/2017 Added Golden Angle spiral progression to Search Mode. Dimensional analysis. Retired single-photo screen, cutting database size in half. Added 500 more photos by Bill.
  • 4/2017 Created Browse Mode for the pairs views, chaining curated pairs by keywords, with the pair-forming options now available under Search Mode.
    Added 1700 more photos by Raf & Skot.
  • 3/2017 Added a free Pair Workbench page for loading your own photos from disk, and from web sites that allow it (e.g. imgur). Scales them to match/fit, lets you toggle with previous photos/pairs. Lets you save screenshots.
  • 2/2017 Converted View to switch between 4 tilings of one or two photos, consolidating earlier work and adding horizontal and stacked landscape tilings.
  • 1/2017 Added 'c'=curated pair option to pairs page, for manually-selected top 15% of over 25K pairs examined.
    Added a new archive by photographers Raf & Skot, with 1500 photos.
  • 12/2016 Added pairs page, with color-match and color-opposite functions.
  • 10/2016 Added exploration when drawing on the photo: the line you draw maps through color space to the next photo, based on averaged colors.
    Added 1700 more of Bill's photos, now caught up.
  • 9/2016 Added click-to-toggle region alongside picture to see previous photo.
    Added 1500 more of Bill's photos. Added 200 of Ellen's photos.
  • 8/2016 Revised keyword algorithm: postponed use of geometrical keywords like 'juxtapose' and 'angular' until 100 photos have been seen.
  • 7/2016 Unified keyword coding schemes and revised keywords.
  • 6/2016 Clicks on different zones of the picture invoke different image matching algorithms, analogous to touching a face.
  • 5/2016 A live DNA molecular dynamics simulation interacts with picture selection, acting as a beating heart for the site. The moving molecule.
  • 4/2016 Added 1400 of Elle's pictures. User mouse behavior now influences picture selection.
  • 1/2016 Elle classified the photos according to her own scheme.
  • 10/2015 Site (single-photo) launched with 6500 of Bill's photos, keywords, color analysis, and - | + .
  • 6/2015 Laptop bought, mothballed server-script random-selection prototype reimplemented in Java.


<——— oOo ———>
Listen, a woman with a bulldozer built this house of now
Carving away the mountain, whose name is your childhood home
We were trying to buy it, buy it, buy it, someone was found killed there
All bones, bones, dry bones

Earth water fire and air
Met together in a garden fair
Put in a basket bound with skin
If you answer this riddle
If you answer this riddle, you'll never begin

— Robin Williamson, Koeeoaddi There

In tribute to Lucy Reynolds, teacher of Graham technique and breeder of dogs.

© 2015,2016,2017 Photoriot.