
What are you, at your core? Awareness is a key part, one that we share with many animals, and can feel when looking into a dog's eyes. I am creating that feeling of awareness with photos, so that you can look into the eye of the computer screen and feel something responding to you.

How does it work? You look at photos, and see what is similar or different about them. Phobrain picks the next pair or single photo according to your instructions. In the default pair Browse Mode, just click on one of the photos to search for the next pair. Phobrain will try to pick a pair with at least one photo matching the photo you have chosen. These pairs have been hand-screened for interest. There is also a rawer Search Mode:

In this mode, there are more options for choosing the next pair, described in detail below. Further, Phobrain adds variation based on your click rhythms and on a DNA dynamics simulation (a sort of heart) to bring it to life.

Single-photo Screen. The original screen: a single photo at a time, vertical or horizontal, with the option to draw on the photo or click on center or corner areas of it to search for matches in color space in different ways. The other options are shared with Search Mode on the double-photo screens: - | + , for color-opposite, random, or keyword match. You can see what color algorithm was used, or what keywords matched, by clicking and holding down the mouse in the space next to these options. You can toggle with the previous photo by clicking in the space on either side of the photo.

Double-photo Screens. These are: two portrait-oriented photos, side by side; two landscape-oriented photos, side by side; and two landscape-oriented photos, stacked one above the other. With the double-photo screens in Search Mode, clicking on a photo results in a match to its neighbor, and the options for choosing the next pair are: + - c | + , where the added ones are + for color similarity (using one of 10 algorithms) and c for curated pairs. Clicking in the grey area next to a photo toggles it with the one that was there before. Clicking below the photos, in the grey area just above the options, toggles both photos with the non-showing pair. Holding this area down for a second restores the most recent pair. Clicking in the grey area next to the options, to the left of the yellow + or to the right of the green +, will cause any keywords shared by the photos, or color-matching algorithm used to choose them, to appear to the right of the options, for as long as the mouse button is held down.

The keywords used by the + option allow some storytelling, like a psychic crossword, or a therapist analyzing the dreams of a dog scratching an electronic itch. The common features could be people, things, colors, shapes, textures: whatever we think would make the most obvious and interesting connections as you hold both pictures in your mind. (Again, you can see the keywords that matched by clicking and holding the mouse next to the options.) The keywords for the features are scripted and discussed like characters in Sesame Street, merging points of view to create a hybrid 'brain'. Each photo is like a brain cell, connected to other photos by keywords they have in common. You explore this 'brain' when you use the + option.
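The idea of photos as brain cells connected by shared keywords can be sketched in a few lines. This is a hedged illustration only, not Phobrain's actual code: the photo IDs and keywords below are invented, and the random choice stands in for whatever weighting the site really uses.

```python
import random

# Toy "brain": each photo is a node, tagged with descriptive keywords.
# Two photos are connected when they share at least one keyword.
photos = {
    "p1": {"woman", "hand", "phone", "blue"},
    "p2": {"boy", "statue", "red", "hand"},
    "p3": {"dog", "blue", "grass"},
}

def keyword_neighbors(photo_id):
    """Return photos sharing at least one keyword with photo_id,
    mapped to the keywords they share."""
    mine = photos[photo_id]
    return {other: mine & kws
            for other, kws in photos.items()
            if other != photo_id and mine & kws}

def next_photo(photo_id):
    """Pick a keyword-connected photo at random, as the + option might."""
    neighbors = keyword_neighbors(photo_id)
    return random.choice(sorted(neighbors)) if neighbors else None
```

Here `keyword_neighbors("p1")` finds `p2` through the shared word "hand" and `p3` through "blue"; following `next_photo` from picture to picture traces a path through the keyword graph, which is roughly what exploring with + feels like.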

Don't worry if you don't see any similarities at first — it's not perfect — but if you keep at it, you will start to see themes that last over a few pictures, then more will start making sense — it's like learning a language you find you already know. You can use another option to change the subject when you get tired of a theme, then return to + if you see something you want to pursue.

The - option tends to go back and forth between lighter and darker photos that don't have descriptive words in common. The | option is completely random (except that in single-pic mode, all options are restricted to a set of about 900 favorites for the first 100 pictures).

Looking at the photo on the left above, we might describe it with the words "woman hand phone face blue". If I click on + for it, I expect to see another picture with at least one of these features, but will blue-ness jump out for me on the next photo? As you go from picture to picture, it is a little like a crossword puzzle, matching up words instead of letters.

Now consider the photo on the right above: it is outdoors not indoors, in public not in private, the background has classic geometry, the real person in it is a boy and not a woman, and the color that jumps out is red instead of blue. On the other hand, there are two males in each picture, and there are representations of people (picture on phone, and statue). Perhaps the most interesting similarity between the two photos is that there is an interaction between a person (or people) and a representation of a person in each. This site can help build up your analytical abilities, although it does not do such a complicated analysis itself, and would be unlikely (we hope) to sequence these two pictures when the + option is used.

Here is a sequence of features chosen to match the next photo using + in a session in Elle's View.


As when learning a language, you can enjoy the view and watch for patterns to emerge. After you have seen 100 photos in the single-photo Views, more abstract keywords like 'juxtapose' and 'angular' come into play, making for more of a challenge.

Sessions: Each browser creates its own session, which should keep you from seeing any repeats of pictures within a given View.
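Per-session repeat avoidance can be sketched as a set of not-yet-shown photos that shrinks with each pick. The class and names below are hypothetical, assumed for illustration rather than taken from Phobrain's implementation.

```python
import random

class ViewSession:
    """Tracks which photos a browser session has not yet seen in one View."""

    def __init__(self, all_photos):
        self.remaining = set(all_photos)

    def next_photo(self):
        """Return an unseen photo, or None once the View is exhausted."""
        if not self.remaining:
            return None
        choice = random.choice(sorted(self.remaining))
        self.remaining.discard(choice)
        return choice

session = ViewSession(["a", "b", "c"])
seen = [session.next_photo() for _ in range(4)]  # three photos, then None
```

Each of the three photos appears exactly once before the session runs dry, which is the no-repeats guarantee described above.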

The dog's eyes: My goal is to make the site smart enough so that it seems alive, like the feeling you get when looking into a dog's eyes. The fading image when you enter the slideshow is a gesture toward that goal. More concretely, a live molecular dynamics simulation is used as a sort of heart, which is affected by clicks on the slideshow page, and in turn affects the next picture you see, and which also gives a continuing life to the site.

Can you make it browsable? I don't plan to add the kind of browsability that other excellent sites offer.

Can we upload pictures? I plan to add the ability to upload photos to the site.

Theory: A picture can tell a story that stands on its own and burns itself into your memory. Put two pictures together in sequence, and the 'picture' now exists in your memory as much as in your eye. The story becomes what is common to the pictures, and this competes for your attention with the other details. You may struggle to find a story and give up. My theory is that if you can find a story more often, you will become more engaged. According to a New York Times blog:

Japanese researchers found that dogs who trained a long gaze on their owners had elevated levels of oxytocin, a hormone produced in the brain that is associated with nurturing and attachment, similar to the feel-good feedback that bolsters bonding between parent and child. After receiving those long gazes, the owners' levels of oxytocin increased, too.

A more nuanced story about oxytocin from Wikipedia.

Related media and software

  • The No Words Forum threads photos on themes like Phobrain, but without a dynamic personality responding in the moment. Very interesting for the variety of viewpoints.
  • uses deep learning to hybridize pairs of pictures, creating novel effects analogous to combining Phobrain pairs in your mind.
  • Google Images allows you to search with words or pictures, and in principle Phobrain could use it for raw associations for its personality to select from (similarly for photo stock agency collections).
  • New deep learning image retrieval methods like Google's could be retrained with Phobrain principles, rather than simply used to feed Phobrain.



  • P. Kainz, M. Mayrhofer-Reinhartshuber, and H. Ahammer. IQM: An extensible and portable open source application for image and signal analysis in Java. PLoS ONE, 10(1):e0116329, Jan. 2015.
  • H. Ahammer, N. Sabathiel, and M. A. Reiss. Is a two-dimensional generalization of the Higuchi algorithm really necessary? Chaos 25, 073104 (2015). doi:10.1063/1.4923030
  • BoofCV, Peter Abeles, 2012. An open source Java library for real-time computer vision and robotics applications.
  • Web 3DNA for DNA model building
  • AMBER: Assisted Model Building with Energy Refinement, D.A. Case, R.M. Betz, W. Botello-Smith, D.S. Cerutti, T.E. Cheatham, III, T.A. Darden, R.E. Duke, T.J. Giese, H. Gohlke, A.W. Goetz, N. Homeyer, S. Izadi, P. Janowski, J. Kaus, A. Kovalenko, T.S. Lee, S. LeGrand, P. Li, C. Lin, T. Luchko, R. Luo, B. Madej, D. Mermelstein, K.M. Merz, G. Monard, H. Nguyen, H.T. Nguyen, I. Omelyan, A. Onufriev, D.R. Roe, A. Roitberg, C. Sagui, C.L. Simmerling, J. Swails, R.C. Walker, J. Wang, R.M. Wolf, X. Wu, L. Xiao, and P.A. Kollman (2016), AMBER 2016, University of California, San Francisco.
  • ParmBSC1 DNA Force Field: A. Pérez, I. Marchán, D. Svozil, J. Sponer, T. E. Cheatham III, C. A. Laughton, and M. Orozco. Refinement of the AMBER force field for nucleic acids: improving the description of alpha/gamma conformers. Biophys J. (2007) 92(11), 3817-29.
  • NGL, a WebGL protein viewer. NGL Viewer: a web application for molecular visualization, Oxford Journals, 2015.
  • A. Oliva and A. Torralba. Modeling the shape of the scene: a holistic representation of the spatial envelope. International Journal of Computer Vision, Vol. 42(3): 145-175, 2001.
  • Jonathon S. Hare, Sina Samangooei, and David P. Dupplaw. 2011. OpenIMAJ and ImageTerrier: Java libraries and tools for scalable multimedia analysis and indexing of images. In Proceedings of the 19th ACM international conference on Multimedia (MM '11). ACM, New York, NY, USA, 691-694. DOI=10.1145/2072298.2072421


  • Ivan Karp, owner of OK Harris gallery, once told me, "What you have here is fine art photography." Memorial, with great remembrances to put that in context.

Site History

  • Gallery
  • What's it all about, Alfie? Discussion of the meaning of it all on
  • 4/2017 Created Browse Mode for the pairs views, chaining curated pairs by keywords, with the pair-forming options now available under Search Mode.
    Added 1700 more photos by Raf & Skot.
  • 3/2017 Added a free Pair Workbench page for loading your own photos from disk, and from web sites that allow it (e.g. imgur). Scales them to match/fit, and lets you toggle with previous photos/pairs.
  • 2/2017 Converted View to switch between 4 tilings of one or two photos, consolidating earlier work and adding horizontal and stacked landscape tilings.
  • 1/2017 Added 'c'=curated pair option to pairs page, for manually-selected top 15% of over 25K pairs examined.
    Added a new archive by photographers Raf & Skot, with 1500 photos.
  • 12/2016 Added pairs page, with color-match and color-opposite functions.
  • 10/2016 Added exploration when drawing on the photo: the line you draw maps through color space to the next photo, based on averaged colors.
    Added 1700 more of Bill's photos, now caught up.
  • 9/2016 Added click-to-toggle region alongside picture to see previous photo.
    Added 1500 more of Bill's photos. Added 200 of Ellen's photos.
  • 8/2016 Revised keyword algorithm: postponed use of geometrical keywords like 'juxtapose' and 'angular' until 100 photos have been seen.
  • 7/2016 Unified keyword coding schemes and revised keywords.
  • 6/2016 Clicks on different zones of the picture invoke different image matching algorithms, analogous to touching a face.
  • 5/2016 A live DNA molecular dynamics simulation interacts with picture selection, acting as a beating heart for the site. The moving molecule.
  • 4/2016 Added 1400 of Elle's pictures. User mouse behavior now influences picture selection.
  • 1/2016 Elle classified the photos according to her own scheme.
  • 10/2015 Site (single-photo) launched with 6500 of Bill's photos, keywords, color analysis, and - | + .
  • 6/2015 Laptop ordered.
  • Quotations for everyday use


<——— oOo ———>
Listen, a woman with a bulldozer built this house of now
Carving away the mountain, whose name is your childhood home
We were trying to buy it, buy it, buy it, someone was found killed
There all bones, bones, dry bones

Earth water fire and air
Met together in a garden fair
Put in a basket bound with skin
If you answer this riddle
If you answer this riddle, you'll never begin

— Robin Williamson, Koeeoaddi There

In tribute to Lucy Reynolds, teacher of Graham technique and breeder of dogs.

© 2015,2016,2017 Photoriot.