This nOde last updated November 24th, 2001 and is permanently morphing...
(2 Men (Eagle) - 13 Ceh (Red) - 188.8.131.52.15)
A family friend who was a patent attorney for Shell Oil helped Patrick submit a patent application. The patent examiners assumed the device was simply transferring sound through bone conduction and refused to issue a patent for twelve years. In a rare move in 1970, the patent office agreed to meet Patrick and his attorney and examine the Neurophone for themselves. Both sides were in for a surprise.
The examiner had a deaf employee attend the meeting to test the device. The man was totally nerve-deaf in one ear and almost totally deaf in the other. Patrick showed him how to use the Neurophone and played a record of the famous Maria Callas singing opera. As the man heard the undistorted beauty of her voice, tears of joy streamed down his face.

When we listen to music or human speech through the Neurophone, we hear sound through two distinct channels. One channel is heard normally by the ears, by means of the cochlea; the other is sent through the skin and/or bone to the saccule. One can easily tell the two modes of hearing apart by plugging the ears while listening to the Neurophone.
If the Neurophone™ electrodes are connected to an ordinary audio amplifier, some sound may be heard, but not as distinctly as when the crystals are connected to the Neurophone. In that case the sound reaches the cochlea by ordinary bone conduction, because the 40 kHz ultrasonic carrier wave needed to activate the saccule is missing.
When the Neurophone crystals are connected to the Neurophone, the ultrasonic carrier wave bypasses the cochlea and activates hearing channels in the saccule.
In the Dolphin Project we developed the basis for many potential new technologies. We were able to ascertain the encoding mechanism used by the human brain to decode speech intelligence patterns, and were also able to decode the mechanism the brain uses to locate sound sources in three-dimensional space. These discoveries led to the development of a 3-D holographic sound system that could place sounds in any location in space as perceived by the listener; in other words, the sound appeared to be coming right out of thin air. The human ear is limited to about 16,000 Hertz (vibrations, pulses or cycles per second), while dolphins generate and hear sounds up to 250,000 Hertz. Our special Neurophone enabled us to hear the full range of dolphin sounds.
When our digital Neurophone patent application was sent to the patent office, the Defense Intelligence Agency placed it under a secrecy order. I was unable to work on the device, or even talk about it, for another five years. This was terribly discouraging: the first patent had taken twelve years to obtain, and now, after all of our work, the device was locked up under a national-security order. The digital Neurophone converts sound waves into a digital signal that matches the time-ratio codes understood by the human brain. These time signals are used not only in speech recognition but also in locating sounds in 3-D space ...
The digital Neurophone is the version that we eventually produced and sold as the Mark XI and the Thinkman Model 50 versions. These Neurophones were especially useful as speed learning machines.
The first Neurophone device was constructed by attaching two Brillo pads to insulated copper wires. The wires from the pads were connected to a reversed audio output transformer that was attached to a hi-fi amplifier. The output voltage of the audio transformer was about 1,500 volts peak-to-peak. While listening to the sound, the signal was perceived as loudest and clearest when the amplifier was over-driven and square waves were generated. At the same time, the transformer would ring, oscillating with a damped waveform at frequencies of 40-50 kHz.
The next Neurophone consisted of a variable-frequency vacuum-tube oscillator that was amplitude-modulated. This output signal was fed into a high-frequency transformer with a flat frequency response over the 20-100 kHz range. The electrodes were placed on the head, and the oscillator was tuned for maximum resonance, using the human body as part of the tank circuit. Later models had a feedback mechanism that automatically adjusted the frequency for resonance. We found that the dielectric constant of human skin is highly variable; to achieve maximum transfer of energy, the unit had to be retuned to resonance to match the 'dynamic dielectric response' of the listener's body. The 2,000-volt peak-to-peak amplitude-modulated carrier wave was then coupled to the body by means of two-inch-diameter electrode disks insulated with mylar films of different thicknesses. The Neurophone is really a scalar wave device, since the out-of-phase signals from the electrodes mix in the non-linear complexities of the skin dielectric.
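The amplitude modulation described above is the standard AM scheme. As a minimal numerical sketch only (not the author's vacuum-tube circuit), assuming a 440 Hz test tone in place of speech, a 192 kHz sample rate, and a modulation index of 0.8:

```python
import numpy as np

fs = 192_000                          # sample rate, high enough for a 40 kHz carrier
t = np.arange(0, 0.01, 1.0 / fs)      # 10 ms of signal

carrier_hz = 40_000                   # ultrasonic carrier, as in the text
audio = np.sin(2 * np.pi * 440 * t)   # stand-in for the speech/music input

# Classic amplitude modulation: the audio shapes the carrier's envelope.
# A modulation index m < 1 keeps the envelope from folding over itself.
m = 0.8
am = (1.0 + m * audio) * np.sin(2 * np.pi * carrier_hz * t)

# The envelope of `am` is (1 + m*audio), so its peak amplitude is at most 1 + m.
peak = am.max()
```

Demodulating such a signal is just envelope recovery; the text's claim is that the skin itself responds to the 40 kHz envelope.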
The signals from each capacitor electrode are 180 degrees out of phase. Each signal is transmitted into the complex dielectric of the body, where phase cancellation takes place; the net result is a scalar vector. This fact was not known at the time I invented the device. The knowledge came later, when we learned that the human nervous system is particularly sensitive to scalar signals. The high-frequency amplitude-modulated Neurophone has excellent sound clarity; the signal is clearly perceived as coming from within the head. We established quite early that some totally nerve-deaf people could hear with the device. For some reason, however, not all nerve-deaf people hear with it the first time.
HOW DOES IT WORK?
The skin is our largest and most complex organ. In addition to being the first line of defense against infection, the skin is a gigantic liquid-crystal brain. The skin is piezoelectric: when it is vibrated or rubbed, it generates electric signals and scalar waves. Every organ of perception evolved from the skin; when we are embryos, our sensory organs develop from folds in the skin. Many primitive organisms and animals can see and hear with their skin. We now know that the skin transmits ultrasonic impulses to an organ in the inner ear known as the saccule. The skin vibrates in resonance with the ultrasonic (40 kHz) Neurophone-modulated carrier wave and transmits the sound from the carrier through multiple channels into the brain. When the Neurophone was originally developed, neurophysiologists considered the brain to be hard-wired, with each cranial nerve hard-wired to its sensory system. The eighth cranial nerve is the nerve bundle that runs from the inner ear to the brain. If our sensory organs were strictly hard-wired, we should theoretically be able to hear only with our ears.
Now the concept of a holographic brain has come into being. The holographic brain theory states that the brain uses a holographic encoding system so that the entire brain may be able to function as a multi-faceted sensory encoding computer. This means that sensory impressions, like hearing, may be encoded so that any part of the brain can recognize input signals according to a special type of signal coding. Theoretically, we should be able to see and hear through multiple channels not just our eyes and ears.
The key to the Neurophone is the stimulation of the nerves of the skin with a digitally coded signal that carries the same time-ratio code that is recognized as sound by any nerve in the body.
All commercial digital speech recognition circuitry is based on so-called dominant-frequency power analysis. While speech can be recognized by such a circuit, the truth is that speech encoding is based on time ratios. If the frequency power-analysis circuits are not phased correctly, they will not work: the intelligence (sound) is carried by phase information.
The frequency content of the voice gives it a certain quality, but frequency alone does not contain the information. All attempts at computer voice recognition and voice generation have been only partially successful. Until digital time-ratio encoding is used, our computers will never be able to really talk to us.
The computer that we developed to recognize speech for the Man-Dolphin communicator used time-ratio analysis only. By recognizing and using time-ratio encoding, we could transmit clear voice data through extremely narrow bandwidths. In one device, we developed a radio transmitter that had a bandwidth of only 300 Hertz while maintaining crystal clear transmission. Since signal-to-noise ratio is based on bandwidth considerations, we were able to transmit clear voice over thousands of miles while using milliwatt power.
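The bandwidth claim can be illustrated with the textbook thermal-noise relation N = kTB (a standard formula, not something taken from the project's hardware): cutting the bandwidth by a factor of ten lowers the receiver's noise floor by 10 dB, which is why a 300 Hz channel needs far less transmit power than a conventional voice channel. The ~3 kHz comparison figure below is my assumption for a typical voice channel.

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 290.0            # standard reference temperature, K

def noise_floor_dbm(bandwidth_hz: float) -> float:
    """Thermal noise power N = k*T*B, expressed in dBm."""
    watts = k_B * T * bandwidth_hz
    return 10.0 * math.log10(watts * 1000.0)

narrow = noise_floor_dbm(300.0)    # the 300 Hz channel described in the text
typical = noise_floor_dbm(3000.0)  # a conventional ~3 kHz voice channel (assumed)

advantage_db = typical - narrow    # 10*log10(3000/300) = 10 dB
```

At equal signal-to-noise ratio, that 10 dB lower noise floor translates directly into a tenfold reduction in required transmit power.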
Improved signal-processing algorithms are the basis of a new series of Neurophones that are currently under development. These new Neurophones use state-of-the-art digital processing to render sound information with much greater clarity.
The Neurophone is an electronic telepathy machine. Several tests prove that it bypasses the eighth cranial nerve, the hearing nerve, and transmits sound directly to the brain. This means that the Neurophone stimulates perception through a seventh or alternative sense.
All hearing aids stimulate tiny bones in the middle ear.
Sometimes, when the eardrum is damaged, the bones of the ear are stimulated by a vibrator placed behind the ear at the base of the skull. Bone conduction will even work through the teeth. But for bone conduction to work, the cochlea (the inner-ear organ that connects to the eighth cranial nerve) must function. People who are nerve-deaf cannot hear through bone conduction because the nerves of the inner ear are not functional.
A number of profoundly nerve-deaf people, including people who have had the entire inner ear removed surgically, have been able to hear with the Neurophone. If the Neurophone electrodes are placed on the closed eyes or on the face, the sound is clearly 'heard' as if it were coming from inside the brain. When the electrodes are placed on the face, the sound is perceived through the trigeminal nerve, so we know the Neurophone can work through the trigeminal or facial nerve. When the facial nerve is deadened by anesthetic injections, we can no longer hear through the face. In these cases there is a fine line where the skin of the face is numb: if the electrodes are placed on the numb skin, nothing is heard, but when they are moved a fraction of an inch onto skin that still has feeling, sound perception is restored and the person can 'hear'.
This proves that sound perception via the Neurophone takes place through the skin, not through bone conduction. An earlier test, performed at Tufts University, was designed by Dr. Dwight Wayne Batteau, one of my partners in the United States Navy Dolphin Communication Project. This test was known as the "Beat Frequency Test". It is well known that sound waves of two slightly different frequencies create a 'beat' note as the waves interfere with each other. For example, if a 300 Hertz tone and a 330 Hertz tone are played into one ear at the same time, a beat note of 30 Hertz is perceived. This is a mechanical summation of the sound in the bone structure of the inner ear. There is another kind of beat: sounds can also beat together in the corpus callosum, in the center of the brain. This binaural beat is used by the Monroe Institute and others to induce altered brain states by entraining the brain (causing brain waves to lock on and follow the signal) into high-alpha or even theta states.
These brain states are associated with creativity, lucid dreaming and other states of consciousness that are otherwise difficult to reach while awake. The Neurophone is a powerful brain entrainment device: if we play alpha or theta signals directly through it, we can move the brain into any state desired.

Batteau's theory was that if the Neurophone electrodes were placed so that the sound was perceived as coming from one side of the head only, and a 300 Hertz signal was played through the Neurophone while a 330 Hertz signal was played through an ordinary headphone, a beat note would result if the signals were summing in the inner-ear bones. When the test was conducted, we perceived two distinct tones without a beat. This again proved that Neurophonic hearing does not occur through bone conduction. When we used a stereo Neurophone, we were able to get a beat note similar to the binaural beat, but this beat occurs inside the nervous system and is not the result of bone conduction.

The Neurophone is a 'gateway' into altered brain states. Its most powerful use may be in direct communication with the brain centers, bypassing the 'filters' or inner mechanisms that may limit our ability to communicate with the brain. If we can unlock the secret of direct audio communication with the brain, we can unlock the secret of visual communication. The skin has receptors that can detect vibration, light, temperature, pressure and friction; all we have to do is stimulate the skin with the right signals.

We are continuing Neurophonic research and have recently developed other modes of Neurophonic transmission. We have also reversed the Neurophone and found that we can detect scalar waves generated by the living system. The detection technique is very similar to the process used by Dr. Hiroshi Motoyama in Japan, who used capacitor electrodes much like those we use with the Neurophone to detect energies from the body's power centers, known as chakras.
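The 300 Hz / 330 Hz beat arithmetic above can be checked numerically. By the trig identity sin A + sin B = 2 cos((A-B)/2) sin((A+B)/2), the summed tones form a 315 Hz tone whose envelope pulses at |330 - 300| = 30 Hz; the sample rate and duration below are arbitrary choices for the demo:

```python
import numpy as np

fs = 8000                         # sample rate, Hz (arbitrary for the demo)
t = np.arange(0, 1.0, 1.0 / fs)   # one second of signal

# The two tones from the "Beat Frequency Test"
mixed = np.sin(2 * np.pi * 300 * t) + np.sin(2 * np.pi * 330 * t)

# sin A + sin B = 2 cos((A-B)/2) sin((A+B)/2):
# a 315 Hz tone whose envelope |2 cos(2*pi*15*t)| pulses 30 times per second
product_form = 2.0 * np.cos(2 * np.pi * 15 * t) * np.sin(2 * np.pi * 315 * t)

beat_hz = abs(330 - 300)          # the perceived beat note, 30 Hz
match = np.allclose(mixed, product_form)
```

The beat exists only where the two waves physically sum in one medium, which is why hearing two separate, unbeating tones in Batteau's test argued against a common bone-conduction path.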
Now a common-sense lecture from Miki is in order here, addressed to the sci-fi idiots out there who seem to think that reality is a massive hallucination they can personally change for everybody with their innate stupidity and total lack of common sense. First of all, don't go over the edge on the potential of manipulating people or their brains with anything, even a silent suggestion that goes to the subconscious. While the criminal retards have proved that people who have no awareness of silent suggestions are much more vulnerable to them than people who can hear them and argue with them, you cannot change people or manipulate them to the degree these retards wish they could. Secondly, there is the panic factor of stupids who picture some electronic device reading their minds - HAH! No way! How many ways does it have to be explained to you dipsy doodles that your brain does not mimic a computer and send coded language with alphabetized words out into thin air, where a computer or electronic device could pick it up with infra-red or EEG or whatever? Do you know how a computer discerns whether to put a particular letter of the alphabet or a digit into its hard-drive memory?
Are you stupid enough to think that in inventing computers and computer codes, man stumbled on the brain code that every race and culture of humans uses to think and talk with? Get off it!
Electronic devices have supposedly been developed, since computers appeared, that can read the information on a computer monitor screen, or the coded information being transferred through phone lines or by radio waves from mobile phones. BUT nobody has the ability to point one of those devices at a hard-drive disc or a floppy and read what is on it with TEMPEST gear, infra-red, a laser beam or anything else - in spite of the data being electromagnetic, and radio waves being able to carry electromagnetic codes. Hey! How about sticking to what is real and really possible, and leave the wishes to wishing wells and the fantasies to science fiction writers.
A more scientific explanation of the Neurophone and how it works is located online at