Updated 14 August 2015, version 1.5
When you studied NLP, did you learn about “the” kinesthetic modality?
The standard NLP model lumps all “feelings” together as one kinesthetic modality, with one set of accessing cues. These cues include belly breathing, slow speech, use of kinesthetic words and phrases (such as “touching base,” “off-balance,” and “warm”), and eye accesses to the (usually) lower right.
This model is simple and easy to learn and use. It’s also obviously inaccurate. Dizziness is not the same kind of “feeling” as happiness, hunger, or warm velvet rubbing across your skin.
That wouldn’t matter to NLPers if kinesthetics all functioned identically when communicating or doing change work. But in fact, subtle distinctions between kinds of kinesthetics often determine whether an intervention will work for a particular person.
Unpacking the kinesthetic modality to discover its components and distinctions was one of the first modeling projects I was involved with, in 2003. This proved so fruitful that as I write this in 2009, my research buddy Jan “yon” Saeger and I are both still exploring kinesthetics. Our findings match neurobiology’s model fairly closely, so let’s start there.
A neurobiological model of kinesthetics
In his 1999 book The Feeling of What Happens: Body and Emotion in the Making of Consciousness, neuroscientist Antonio Damasio describes the “somatosensory” modality, equivalent to NLP’s kinesthetic modality:
“[It consists of] several subsystems, each of which conveys signals to the brain about the state of very different aspects of the body. It is apparent that these different signaling systems surfaced at different points in evolution. They use different machinery in terms of the nerve fibers that carry the signals from the body to the central nervous system, and they are also different in the number, type, and position of the central nervous system relays onto which they map their signals. In fact, one aspect of somatosensory signaling does not use neurons at all but rather chemical substances available in the bloodstream.”
Damasio divides the somatosensory system into 3 categories, which sense:
1. Internal milieu. These systems mostly describe the chemical environment of cells. Aimed primarily at the sensing of internal states, they include:
- Chemicals in the bloodstream.
- Nociception. These sensors detect actual and potential damage, including extreme temperatures, impact, cuts, chemicals such as acids and hot peppers, and inflammation. Signals from nociceptors may or may not be perceived as pain.
- State of smooth muscles, including the viscera and blood vessels.
2. Body position. These systems describe both internal states and external relationships with the world, and include:
- Proprioception, which senses body position.
- The vestibular (balance) system.
3. Fine touch. Uses a variety of skin sensors to detect texture, form, weight, temperature, and so on.
Modeling K with NLP
NLP explores the subjective aspects of experience. So the model Jan and I developed includes some distinctions Damasio’s model lacks, including constructed representations and synesthesias. Our model also lacks specificity regarding nerves and neural processing. Remember, at the perceptual level where NLP works, we are dealing almost entirely with high-level constructed and modified sensory representations, not raw sensory input.
Also, our model is designed to be useful for doing change work. Accuracy is only a minor criterion. Jan and I developed distinctions that we found useful in helping people change, troubleshooting change processes when they didn’t work, and developing workable new NLP techniques. We don’t claim that our distinctions are true, only that they’re useful.
And although I modeled kinesthetics while working with Jan, in this article I present my model of our results.
Perceptual and meta-kinesthetics
NLP distinguishes 2 major classes of kinesthetics that neurobiology doesn’t:
- Perceptual kinesthetics that register temperature, pressure, movement, body position, etc. These are perceptions OF events.
- Meta-kinesthetics that evaluate what gets perceived. These are perceptions ABOUT events.
For instance, you might have very different meta-kinesthetic feelings about a light touch on your back depending on whether you think you are being touched by someone you dislike, or by a loved one, or think a bee landed on you!
Because perceptual kinesthetics and meta-kinesthetics are both perceived as “feelings,” people sometimes have difficulty discerning which is which. People sense perceptual kinesthetics all over and within the body, particularly on the skin. People experience meta-kinesthetics primarily in the torso, particularly along the midline, and often within the body rather than on its surface.
As you’ll see below, the brain uses several of the same kinesthetic senses to represent perceptual and meta-kinesthetics.
Kinesthetic sensory modalities
Here are the basic kinesthetic sensory modalities as Jan and I modeled them. (Several other NLP modelers have developed similar lists.)
Tactile K is the “sense of touch.” It senses texture, pressure, and temperature.
Vestibular K is motion and balance sensation from the inner ear.
Visceral K is literally “gut feelings.” (Viscera are internal organs such as the stomach and intestines.) Visceral K generates much of the feeling component of emotions. Since much of what people experience as emotions is chemical (such as endorphins in the brain), aspects of this modality process slowly — at chemical rather than nerve speeds.
Visceral K also signals hunger, fullness, and tummy upsets. Some individuals have a difficult time distinguishing these physical signals from emotional ones.
Somatosensory K senses internal body states such as blood chemistry and blood pressure. Many of its “readouts” are not directly and easily accessible to consciousness.
Proprioceptive K senses body position and spatial relations — for instance, the spatial relationship between your left little finger, right knee, and right earlobe.
Each of these modalities has its own eye access cue. In most people they are grouped in the kinesthetic eye access area (lower right for most right-handed people). However, a few people we tested had the eye access for one kinesthetic modality outside the usual K area. The arrangement of the individual K access cues varies between individuals, so it’s not surprising that these small (and confusing) distinctions weren’t included in NLP’s eye access cue model.
(We have not yet modeled other kinesthetic accessing cues, such as breathing patterns, to determine whether they also vary by kinesthetic modality.)
Just like visual and auditory, each kinesthetic modality can be used for sensory, remembered, and constructed perceptions. Constructed proprioceptive K is especially important, as people use proprioception to encode spatial information both inside and outside the body.
Unusual kinesthetic senses
The kinesthetic senses listed above are fairly straightforward: what you perceive (the map) has fairly close correspondence with the phenomenon you’re sensing (the territory). That doesn’t seem to be the case with another sense we modeled:
Background K is a subtle feeling that occurs everywhere in the body, in 3D, like water filling a jar. (In contrast, tactile K occurs mostly at the body surface. And while proprioceptive K maps spatial relations between body parts, it doesn’t necessarily provide a sense of volume.)
Background K is most easily noticed in the fingertips and feet as a slight feeling of vibration, tingling, electricity, or energy. This feeling is used by some hypnotists as a convincer that the subject is in trance. Actually, it is always present, just unnoticed.
Jan and I think background K exists to provide the brain with an ongoing kinesthetic perception of where the body and limbs are. This perception cannot depend on movement to generate a signal. (An animal that had to fidget to figure out where its body parts were would be easy prey.) Background K probably isn’t reporting actual sensations in the body. Instead, we think it’s a constructed map generated from the input from multiple senses, including proprioception, tactile K (pressure sensing of weight), and perhaps vestibular K.
To get a sense of the difference between proprioceptive and background kinesthetics, try this experiment with your eyes closed: First, move your arm here and there, noticing the many kinesthetic perceptions that let you know exactly where it is. (Take a moment to do that now, before you read the next step.)
Done that? Good…
Now notice some part of your body you haven’t moved for a while. How do you know where it is? Compare the feeling in that area to the kinesthetics of moving your arm, and notice the differences.
Perhaps because it’s a construct, background K seems especially prone to conscious manipulation. The perceived sensation can be intensified, altered, moved, and even projected beyond the body. It can feel like energy or electricity flowing through the body, even outside it. Sensations that involve background K are called energy, prana, chi, and a variety of other names by practitioners of tantra, reiki, martial arts, etc. Mystic Eckhart Tolle calls background K “aliveness.”
To deal with troubling emotions, Richard Bandler developed an intervention that utilizes another important K sensation, spin. To get an experience of spin, follow Richard’s process using any unpleasant emotion:
Have the client close their eyes, then ask them to gesture with a finger or hand to indicate answers to your questions. Now say:
- “Where in your body does the feeling start?”
- “Where does the feeling move to?” (This gives you the path of the spin.)
- “As it moves along this path, does the feeling move in a clockwise or counterclockwise direction?”
- “Reverse the direction of the spin as the emotion travels the same path… and tell me what you notice that is different when you do that.”
Once the spin reverses, the troublesome emotion reduces greatly in intensity, sometimes to zero. In his article “Some Great New Methods,” Nick Kemp writes, “I have used this process successfully with a large number of clients with anxiety, panic attacks and phobias, and also with OCD, anger, jealousy, PTSD, and other very intense feelings. I have not yet found an example of an intense feeling that it did not work with.”
You can also intensify a pleasant emotion by spinning it faster in the same direction.
And you can change the axis of spin. If the emotion moves bottom to top, so its spin axis is vertical, try changing the axis of spin so it is horizontal side-to-side or front-to-back, or diagonal. (I keep it spinning along its original path, but you could try changing the path also.) In my experience there is usually a combination of axis orientation and spin direction that produces the biggest change.
Now that you’ve experienced spin yourself, take a moment to notice how weird it is:
- You feel a sensation…
- of something that’s not you, that isn’t part of your body…
- doing something you have never experienced. (None of your body parts spin on an axis.)
I know of no analog in real experience. Spin is apparently a meta-kinesthetic sensation the human brain generates in order to encode metadata such as emotional intensity.
Rather than being a separate modality, I suspect that spin is a submodality of background K. Spin may incorporate some proprioception also, which would make it a synesthesia.
Some “senses” are synesthesias
Curiously, several important K sensations are synesthesias built from the input of several senses. Just as the brain compares the 2D pictures from each eye to generate 3D information about depth perception, it compares data from different senses. This allows it to derive information that isn’t available from only one data set.
For instance, a moving dark area might be a shadow or optical illusion. But if there’s a simultaneous sound from the same direction, the image is likely to be something real and significant, such as a person or animal.
Perceptions based on several senses are much likelier to be robust, and to keep working even when one or more channels of sensory input get distorted or interrupted.
Balance involves input from:
- the vestibular system of your inner ear,
- pressure and position sensors in your body (such as your weight on your butt or feet as you read this), and
- your visual system.
Disrupt any one of those component systems (for instance by tilting your head or closing your eyes), and you can still keep your balance. Disrupt two components (for instance by closing your eyes and tilting your head) and balancing becomes much harder. Balance is the summary of multiple sensory inputs, not one sense alone.
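The robustness described above can be sketched as a toy sensor-fusion simulation. This is my illustration, not part of the NLP model: it simply shows that averaging several redundant noisy estimates of the same quantity yields a steadier result, and that losing inputs degrades the estimate gracefully rather than destroying it.

```python
# Toy sketch of redundant sensor fusion: several noisy "senses" estimating
# the same quantity (e.g. body tilt) give a more stable fused result than
# any one sense alone, and the system still works when inputs drop out.
import random

random.seed(0)  # make the simulation repeatable

def fused_estimate(true_value, noise_levels):
    """Average one noisy reading per sensor; more sensors -> steadier estimate."""
    readings = [true_value + random.gauss(0, s) for s in noise_levels]
    return sum(readings) / len(readings)

# Average absolute error over many trials: three working senses vs. one.
three_err = sum(abs(fused_estimate(10.0, [1.0, 1.0, 1.0]) - 10.0)
                for _ in range(1000)) / 1000
one_err = sum(abs(fused_estimate(10.0, [1.0]) - 10.0)
              for _ in range(1000)) / 1000
print(f"avg error with three senses: {three_err:.2f}; with one sense: {one_err:.2f}")
```

With three inputs the fused error is noticeably smaller than with one, mirroring how balance survives closing your eyes but gets much harder when two component systems are disrupted.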
Emotions are meta-kinesthetic synesthesias of visceral K (the main “feeling” of feelings) and spin K, combined with other representations such as sounds and pictures that give those feelings meaning. Meaning-making representations may be the main difference between physiologically similar states such as fear and excitement.
Both emotions and motivations often involve an additional balance K component. People feel “centered” or “off-balance,” pulled or pushed toward or away from their experiences and representations.
Efforting is the subjective sense of how much real or anticipated effort it takes or will take to do something. Like emotion, efforting is a meta-kinesthetic evaluation rather than a perceptual kinesthetic experience.
Efforting evaluations may be wildly inaccurate. Efforting perceived as unrealistically low can prompt people to take risks and over-commit to projects that may never get completed. Efforting perceived as unrealistically high can keep people from attempting to achieve doable outcomes and goals.
Motor output is the sense of doing. It detects both skeletal muscle activity (what people usually think of as “doing” or “action”) and smooth muscle (visceral) activity (emotional K).
That motor output detects both types of K is important because many strategies use motor output as their completion criteria. If the needed motor output is action, but the client’s strategy stops after smooth muscle “action” (emotion), they don’t get their outcome. People with this problem may run strong emotional states around activities and goals on which they don’t take action. The opposite problem is strategies that output skeletal muscle activity (such as breaking objects, hitting people, or connecting sexually) when visceral motor output that generates an emotion would be more appropriate.
Shape and texture recognition. Recognizing shapes by touch relies on both tactile cues (touching the surface) and proprioceptive cues (such as how widely you must spread your fingers or arms to reach around the object).
1. If you can’t see the object you’re touching, your brain will typically use kinesthetic cues to construct at least a vague visual representation of it. This helps you visually recognize the item if you see it later.
2. If you can see the object, your brain integrates what you feel with what you see, and uses each sense to cross-check the other.
3. If you can see the object but can’t touch it, your brain fabricates a kinesthetic representation of it, including its shape, spatial qualities, and tactile texture. This kinesthetic map of the object facilitates interacting with it: touching it, picking it up, walking around it, and recognizing it by feel, even if you’ve never touched it before. This mechanism lets you look at velvet, and know that it feels soft.
4. The brain also uses the mechanisms described in #3 to build kinesthetic maps of the space around you, using visual and auditory cues. (You don’t have to touch the ceiling to have a kinesthetic map of its location and texture.)
Vision. Proprioception is so good at representing spatial relations that the brain even uses it to represent location in the visual system!
Human vision is excellent at accurately representing direction. But its depth perception isn’t precise except at close distances (where the parallax between the two eyes is significant) or when objects are moving in directions where their motion creates visual parallax over time.
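The falloff in binocular depth precision can be illustrated with a rough geometric sketch. This is a standalone illustration of the parallax point above; the 6.5 cm eye separation is an assumed average, not a figure from the article.

```python
# The angular difference between the two eyes' lines of sight (vergence angle)
# shrinks rapidly with distance, so binocular parallax carries precise depth
# information only at close range.
import math

EYE_SEPARATION = 0.065  # meters; assumed typical adult interocular distance

def vergence_angle_deg(distance_m):
    """Angle between the two eyes' lines of sight when fixating at distance_m."""
    return math.degrees(2 * math.atan((EYE_SEPARATION / 2) / distance_m))

for d in (0.5, 2.0, 10.0, 50.0):
    print(f"{d:5.1f} m -> {vergence_angle_deg(d):.4f} degrees")
```

The angle drops from several degrees at arm's length to a small fraction of a degree at fifty meters, which is why stereo depth cues matter mostly up close.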
Visual depth perception easily gets confused by objects that are moving in relation to each other, are spatially close to each other, or provide visual depth cues that conflict with their actual location. That’s why it’s so easy for humans to perceive 3D depth in a 2D painting or photograph.
Nature’s workaround is to use vision to detect motion and provide some directional information, while proprioception encodes exact position and spatial relations. Proprioception is very good at that, since its original function was dynamically mapping body position in 3 dimensions so that our ancestors wouldn’t trip over their own legs or fall out of a tree.
Even when you and your surroundings are moving in relation to each other, proprioception can maintain a relatively accurate 3D representation in circumstances where vision gets confused. This is partly because the proprioceptive map for the visual system gets built using cues from hearing and various K modalities including balance, in addition to visual cues.
Constructed proprioceptive representations used by the visual system extend well beyond the body — often far beyond the current visual field.
I suspect that pain — the subjective experience of physical suffering — also belongs on this list, as it seems to have a significant emotional component. However, we have not yet modeled it.
Implications of multiple K modalities
When NLPers regard “kinesthetics” as multiple modalities, new distinctions become possible:
Associated and disassociated K
In working with athletes, primarily child gymnasts, NLPer Michael Harris discovered that people differ in which aspects of kinesthetics they associate into. Top gymnasts tend to stay associated in their bodies using proprioceptive K, so they are acutely aware of body position and what they’re doing. They’re also disassociated from their emotions, so they don’t get upset when they make a mistake or a coach berates them. Someone doing a lot of emotional drama is probably associated into emotional K, but might be disassociated from proprioceptive K and thus ineffective at taking action.
Associating into and disassociating from different types of K can be helpful in dealing with troublesome memories. For instance, you can have a client disassociate from unpleasant emotional K, while staying associated into other types of K to preserve valuable sensory data about their experience.
Kinesthetic processing speed
Whether a “kinesthetic” person speaks, decides, and acts slowly or quickly depends on which types of K they use to do their internal processing. Someone who relies heavily on their emotions and “gut feelings” to process abstractions is likely to think and talk slowly, because visceral and emotional K are slow. (Chemical changes propagate much more slowly than nerve signals.) Someone who relies on proprioceptive K may think and talk faster than even “visual” people, because proprioception is so good at fast simultaneous calculation.
I haven’t yet tested mapping across strategies from one kinesthetic modality to another, such as from slow emotional processing to fast proprioceptive processing. I suspect the results will be interesting. (If you try this, please post your results in the comments at the bottom of this page.)
Location encodes meaning
Humans use location to encode data about meaning. That’s why so many NLP models, distinctions, and processes utilize location. Examples include timelines, social panoramas, eye access cues, the visual Swish, the Fast Phobia Cure, and submodality changes such as moving a troubling voice or picture farther away.
Location can be represented many ways, including visually, auditorily, and via tactile and/or proprioceptive kinesthetics. Sometimes NLP processes don’t work because the client understands the instructions to refer to a different modality of spatial data than the NLPer intends. Seeing where something is may have a different effect than feeling where it is. If you want the client to pay attention to kinesthetic representations, say so.
When a client moves a visual or auditory representation, its kinesthetics usually move by implication… but not always. If proprioceptive K doesn’t change location along with the rest of the representation, usually the change doesn’t have much effect or doesn’t last. Often the representation will snap back to its previous location. Having the client physically grab and move the representation is often sufficient to solve this problem.
Get specific about kinesthetics
Specifying which kinesthetic modalities you want clients to use, and how to use them, can get you much better results when doing NLP.
When you give specific instructions, you are more likely to get the results you intend. Jan and I have both found that when NLP processes don’t work, it’s frequently because the explorer is using the wrong type of kinesthetics. If all you specify is “feelings,” some people will access emotional K, others will access tactile K, and others will access proprioception.
Often an NLP practitioner assumes that the client is accessing the same kind of kinesthetics the practitioner intends, but the client does something else. Changing emotional K won’t necessarily do anything useful to a balance K representation, or vice versa. Use K modality distinctions to guide clients into doing precisely what’s needed.
Using kinesthetic distinctions, you can also:
Stack kinesthetic submodalities to trigger association. When you ask someone “Where is the center of that feeling in your body?” or “As you’re in that experience, what is the temperature?”, they have to associate at least a little bit to answer your question. By specifying submodalities from several kinds of K, you are much more likely to generate a strong association than if you only specify “feelings.”
Stack K modalities while working with mental constructs. When you have a client physically grab a representation and move it from one place to another, they activate their proprioceptive, balance, and motor K systems, and possibly tactile K. This is much more likely to get a change than merely having them represent the location change visually or auditorily.
You’re also more likely to produce powerful changes if you treat the client’s mental representations as real objects. I look at, point toward, and talk to the spatial location of a client’s representations. I step around or over them as I work. I grab client representations and move them around, showing obvious effort as I do so. This covertly reinforces the idea that the client’s representations are real rather than imaginary.
It also activates the client’s mirror neuron system. When you shove someone’s representations around, their mirror neurons generate the experience of them physically moving their representations. When you activate several of your kinesthetic modalities while interacting with a client’s mental constructs, the client is much more likely to do the same. This makes them more likely to get powerful, lasting changes.
Use K distinctions to shift meaning. Emotions, balance, effort, spin, and location: the brain uses kinesthetics to encode many kinds of information about experiences and sensations, and to give experiences meaning. Thus a lot of NLP processes explicitly or implicitly use metadata encoded in the K modalities to change meaning.
Once you understand multiple K modalities, you can figure out what kind of K metadata an intervention is intended to change, then use specific language and actions to make that change easier for the client. You can also add in other K modalities to compound and strengthen the intervention’s effect.
NLP’s simple model of kinesthetics as one sensory system is easy to learn and use, but has significant limitations. By using a model with more distinctions, you can increase your NLP effectiveness and make change easier for your clients.