On November 2nd, 2017, the Debunker Club sponsored a one-hour Twitter debate using the hashtag #DebunkDebate. We had a wonderful, cacophonous dialogue in typical Twitter-chat fashion.
The file below contains all the tweets from the debate.
Download Great 70-20-10 Debate Tweet Stream
Also, Cara North posted a prettier version here.
Let it be known… LOL… that our comments were down for a while, but are back up as of August 12, 2018. Yay!
Here is a nice post that debunks personality tests.
Has anyone run across, or had experience with, David Kolb’s Experiential Learning Theory and the Learning Style Inventory? I’ve recently run across a LinkedIn post where someone is advertising a class on how to implement it. I did a quick search on Kolb, and he’s written quite a few books and articles. Most of his work was done in the early 80s, but I’m also seeing current research being done using both the Experiential Learning Cycles and the Learning Style Inventory. I found an article where he replies to Freedman and Stumpf, who apparently criticized his Learning Styles Inventory. I am not able to share it here. Full disclosure: I haven’t read all of it, nor have I been able to dig too deeply into it, which is why I’m bringing the question here. Thanks.
I noticed this eminent Ph.D. and Assistant Dean for Education Innovation providing insight on how to apply Dale’s Cone in learning: http://www.queensu.ca/teachingandlearning/modules/active/documents/Dales_Cone_of_Experience_summary.pdf
What suddenly struck me was how at odds this “research” is with that of learning styles. If learning styles are a thing, then shouldn’t the percentages vary at each level of the cone depending on the individual’s learning style?
I’m wondering why I’ve never seen any debate about this inherent contradiction from proponents.
Reading an article in Inside Higher Ed in which our campus was featured for its initiatives to improve accessibility for our online content, I saw this little gem:
…”accessibility efforts …can even serve students who don’t have learning disabilities but learn better from reading text than hearing it out loud…”.
I see this all the time. There are two problems with it. One, it obviously assumes both that learning styles are legitimate AND that the reader knows that and agrees with it. The term “learning styles” isn’t even used and doesn’t have to be. What’s not said is just as important as what is.
And two, used in the context of accessibility, this kind of statement means precious time and effort are being diverted from REALLY making content accessible to PRETENDING to, by addressing learning styles.
This is why addressing myths about how people learn is so important. Real people with real learning problems don’t get helped when we mess around with junk science.
Robert Bacal posted this article on LinkedIn, which he found on Medium. Interestingly, it shows that commercial interests and motivations may have propelled the creation of the Myers-Briggs Type Indicator.
Here is a nice article on personality tests like the MBTI, DISC, etc. https://www.newyorker.com/magazine/2018/09/10/what-personality-tests-really-deliver
And here is another nail in the coffin of learning styles: https://theeconomyofmeaning.com/2018/09/04/why-its-improbable-that-adapting-to-learning-styles-will-ever-work-new-review/ Thanks to the amazing Pedro De Bruyckere
Daniel Engber, writing in Slate, argued earlier this year that worries about a “post-truth age” were overstated. Here is his article: https://slate.com/health-and-science/2018/01/weve-been-told-were-living-in-a-post-truth-age-dont-believe-it.html
Today Engber (on Twitter) pointed us to another article, which found evidence that another worry about debunking—that too many persuasion attempts might backfire—has been found to be false. It looks like more debunking efforts are better than fewer debunking efforts! Here’s the pre-publication research article: http://www.emc-lab.org/uploads/1/1/3/6/113627673/ecker.2018ip.jarmac.pdf
I’m sure there will be more studies to come. We live in a time when fake news, misinformation, and deception are rampant. Researchers will want to take a look to see what can be done.
I think using what is known about persuasion is important. Those in the business of healthcare intervention design might be on the cutting edge of applying this in the real world.
I am moderating a webinar teaching litigation support people how to write reports. The lovely slide of Edgar Dale’s cone, defiled by percentages, appeared with “University of Texas” at the top, as if it were a theory endorsed by them. That led me to a series of searches that ultimately landed me on this website.
Thanks for sharing, Troy! Here’s a good article related to this very thing: https://www.worklearning.com/2015/01/05/mythical-retention-data-the-corrupted-cone/
This piece from Campus Technology features people from my college who are part of the accessibility initiative at our university. It’s shot through with references to Learning Styles as a justification and argument in favor of Universal Design for Learning. One of the ways you make your learning “universal” is to attend to different learning styles. It’s maddening.
This is a great example of debunking a myth.
The myth is: “70% of change efforts fail.”
It’s not a learning myth, but the speaker does a great job of slowly, methodically crushing the myth.
On today’s CBS This Morning, they presented a story on the University of Vermont’s efforts to encourage positive choices in their incoming first-year students. They said the professor and staff are using neuroscience to do so. They aren’t, and it’s very odd. They are using behavioral psychology, behavioral economics, and other cognitive principles, in my opinion. What do you think? https://www.cbs.com/shows/cbs_this_morning/video/WOopsW_X28BMsNGK2S7pyQBn9nLUmZA4/substance-free-dorms-at-university-of-vermont-encourage-wellness/
While “brain science” and “neuro-” seem to be really hot nowadays, the NeuroLeadership Institute (NLI) is at the forefront of pushing this belief system.
I worked at a large company, in the Sales training group. They totally swallowed the NLI. They did not have a grounded learning program, could not identify what competencies actually were required to be successful in their jobs. They had a pretty LMS, with pretty, flashy elearning, mostly unrelated to the target audience’s actual skills. And they slipped in NLI lingo every chance they could.
I went to an NLI conference. Every insight NLI offered, while flashing pictures of the brain lighting up in different colors on the screen, came from Adult Learning, Instructional Design, and real learning science.
I approached the NLI leader at the end of one session, proffered that observation, and asked if he was familiar with real learning research. His eyes sort of glazed over and he muttered “Interesting” before heading off to engage with his starry-eyed acolytes.
Will, is there a section devoted to debunking “Neuro-” (fill in the blank) approaches to learning?
We don’t have a special collection on neuroscience myths, but one is well deserved. I wrote about this on my blog not too long ago: https://www.worklearning.com/2016/01/05/brain-based-learning-and-neuroscience-what-the-research-says/
A Neuroscience section might be useful.
NLI is wasting millions and millions of dollars of corporate L&D budgets.
It is essentially useless.
I enjoyed this article in Forbes. It says that professional athletes, given their visibility and influence, should be careful what they say about science. Those of us with influence in the learning field should be careful as well. What we say about learning matters! Here’s the Forbes article: https://www.forbes.com/sites/marshallshepherd/2018/12/13/a-cautionary-note-for-athletes-kids-are-watching-what-you-say-about-science/
You’re right, Will, this is an interesting article. We all should be cautioned because words DO matter.
BUT…we have to be careful not to veer into censorship. I’m always a little leery of the argument that we have to protect people whose critical thinking skills are not very well developed, be they kids or less educated adults, from viewpoints that might “harm” them or lead them to believe falsehoods. That’s a little condescending to me, and taken in a different context, it sounds a little like brainwashing. I don’t think that’s this writer’s point, but it’s an argument I’ve heard before in defense of censorship.
All voices should be heard, including ones we don’t agree with or that are patently false. Don’t believe for a second that not hearing falsehoods or myths means nobody believes them. Only by getting them out in the air and then addressing them can they be countered. That’s why I appreciate the Debunker Club. We change minds by pointing to those voices and providing the evidence for why they’re wrong.
A nice video against learning styles: https://www.youtube.com/watch?v=855Now8h5Rs
While looking for scholarly studies, including meta-analyses, on the efficacy of “Emotional Intelligence” training, I found a couple of studies with some positive results, but there were problems with sample size and study design that are hard to dismiss.
What I found more interesting is the amount of churn in the literature about the definition of EI, and the lack of experimental research despite:
A. The prevalence of the term appearing in PubMed since 1985: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6246631/bin/fpsyg-09-02155-g001.jpg
B. Comments in the literature such as:
– November 2018 “This paper discusses one of the most pervasive problems regarding EI-related individual differences, namely, the lack of a meaningful theoretical framework. ” https://dx.doi.org/10.3389%2Ffpsyg.2018.02155
– April 2018 “Human resource practitioners place value on selecting and training a more emotionally intelligent workforce. Despite this, research has yet to systematically investigate whether emotional intelligence can in fact be trained.” https://doi.org/10.1016/j.hrmr.2018.03.002
– June 2004 “…this review demonstrates that recent research has made important strides towards understanding the usefulness of EI in the workplace. However, the ratio of hyperbole to hard evidence is high, with over‐reliance in the literature on expert opinion, anecdote, case studies, and unpublished proprietary surveys.” https://doi.org/10.1111/j.1464-0597.2004.00176.x
I am not suggesting that EI isn’t something real, but I have a feeling that all the EI training corporations are spending millions of dollars on may be worthy of scrutiny. Does anyone have any good references for this?
An interesting two-fer spotted on LinkedIn.
A user posted a link to the infamous Learning Pyramid.
Another user commented:
“Frank Lopez Strategic Management at Harvard Extension School
This pyramid puts a renewed spin on the classic 70-20-10 model — Learning (Lecture, Reading, Audiovisual) is at 20% — Connect (Demonstration and Discussion) at 30% — and Experience (Practice Doing and Teach Others) at 50%. The emergence of CoP and CoE’s playing a larger role in promoting a learning culture.”
This conflation of two debunked semi-scams was just missing a mention of Learning Styles for a trifecta!
Oh, I love it when learning myths collide!
So, this isn’t a myth I’ve heard about, but it sounds dangerously close to being one. The idea is that the optimal score on a test should be 85%. Higher than that and the material was too easy for you; less than that and the material was too hard. The 85% mark hits the “sweet spot” for optimal challenge.
I like the idea of learning being like a video game: too easy and it’s boring; too hard and it’s frustrating. But attaching numbers like this always makes me uncomfortable. I haven’t read the cited articles in this piece yet, but I will and will chime in again then. In the meantime, is it just my Myth-dar that’s going off here?
I just saw the article on this (from three days ago on 14 January 2019), and accessed the original study. The research is NOT really generalizable. They do a study where (and you won’t believe this), “ambiguous stimuli must be sorted into one of two classes.” Where do real learners learn by sorting ambiguous stimuli? This is complete poppycock!
Unfortunately, the paper has a ton of mathematical formulae, so it will look super credible and people may pick this up…
Watch for it… We will soon be told to shoot for an 85% cutoff score on our knowledge checks!
I don’t even know what that means…”sorting ambiguous stimuli?” Do they mean…guessing?
I went back to the research article to find you an example. In reading the paper more closely, I found that they didn’t even do an actual experiment, they just did mathematical simulations.
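For what it’s worth, the paper’s headline number falls out of a short calculation rather than any classroom data. Below is a minimal sketch (my own reconstruction, not the authors’ code) of the Gaussian-noise setup as I understand it: a learner with precision `beta` facing difficulty `delta` has error rate Φ(−β·δ), and gradient descent on that error improves precision at a speed proportional to δ·φ(β·δ). Maximizing that speed over difficulty lands the error rate at Φ(−1) ≈ 15.87%, i.e. roughly 85% accuracy. The value of `beta` here is arbitrary; the optimum error rate is the same for any positive value.

```python
import math

def pdf(x):
    # standard normal probability density
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def cdf(x):
    # standard normal cumulative distribution
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

beta = 2.0  # learner's current precision (arbitrary illustrative value)

# Grid-search the training difficulty that maximizes learning speed,
# where speed is proportional to delta * pdf(beta * delta).
deltas = [i / 1000 for i in range(1, 5000)]
speeds = [d * pdf(beta * d) for d in deltas]
best_delta = deltas[speeds.index(max(speeds))]

# Error rate at the speed-maximizing difficulty.
optimal_error_rate = cdf(-beta * best_delta)

print(f"best difficulty: {best_delta:.3f}")           # ≈ 1/beta
print(f"optimal error rate: {optimal_error_rate:.4f}")  # ≈ 0.1587, i.e. ~84% accuracy
```

Note that the optimum is an *error rate on the training items in this idealized model*, not a cutoff score for real-world knowledge checks, which is exactly why generalizing it feels premature.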
Here is a description of the kind of task they are simulating. A quote from the article:
A new article on Listening Styles was just published in the Feb edition of TD magazine, and I’d like to get the group’s thoughts on it. The author speaks in several places in the article about “research” on listening styles, but provides no sources. So, as a good debunker, I went to her website and downloaded the Validation Research paper on their instrument, and of course there is no source info; it just states, “ECHO has undergone rigorous analysis in all four of these categories in conjunction with experts at the University of Mississippi.”
The fact that she talks about the research around learning styles, and that we all learn differently, got my back up right away. Plus, the article is about a proprietary tool. I’ve never heard of listening styles before, so I thought I would throw it out to the group. A Google search shows a few results, but not in a training context.
Here is the article:
The author’s website:
The research paper:
Curious to hear what the group thinks.
IBM Training (https://www.ibm.com/blogs/ibm-training/) has available (today) on its website a report called The Value of Training, copyrighted 2014. It encourages the use of learning styles.
Here is a good one… A senior manager was fired, at least partially, because she wouldn’t take the Myers-Briggs personality test, which is neither valid nor reliable. Here is the story: https://www.cjr.org/analysis/the-markup.php
I am beyond beside myself. I am taking a course on instructional design from a respected online learning consortium (in fact, that is their name). Last week we had a reading that featured 6 full pages extolling learning styles. This week, we were treated to a reading that included this gem:
I might shoot myself. No, I won’t…I’ll drag myself over to the water cooler discussion and try to change some minds.
Wow! You got a double shot of learning mythness! And I’ve never seen this version before! Thanks for sharing, and take it easy! SMILE.
What are the implications for learning when we think there are “digital natives” and that they should be treated differently? I’ve recently had someone at work (a learning company) suggest, for example, that we should run tests on digital natives in 10 years’ time to understand what screen time did or didn’t do to/for them. I believe the concept of “digital natives” was debunked some time ago. Thoughts?
Barb, I think you’re right. I remember seeing research that said different generations don’t really learn differently and that old people can be just as facile as younger people with technology. I didn’t study this closely, so I’m interested to hear what others have seen.
Job requirement on a job post from Harvard Business School (Associate Director, Learning and Development):
“Experience with (or a willingness to obtain certification) diagnostic/assessment tools including, but not limited to ESCI, LVI, Disc, Strong Interest Inventory, and Myers Briggs strongly preferred.”
Great article on the constant need for debunking by Brett Christensen! It includes a story of how a speaker advocated for generational differences in learning and then proved the opposite when she, just like her son, went to YouTube to learn how to fix something. https://workplaceperformanceblog.wordpress.com/2019/06/06/debunking-bad-training-practices-is-never-ending-work/
I’ve tried to find a good debunk for this silliness: “types of learners” and “how they learn best,” in this advertisement/infomercial email. It’s the typical bastardization of Dale’s cone.
Today I received an email newsletter from MindTools, a company I respect, with an article about active listening, https://www.mindtools.com/CommSkll/ActiveListening.htm. Near the top of the article, I read, “… research suggests that we only remember between 25 percent and 50 percent of what we hear, as described by Edgar Dale’s Cone of Experience.” I wrote to the company and shared the debunker page, https://debunker.club/2015/05/22/people-dont-remember-10-20-30-not-even-on-a-cone/.
Recently the FAA released its Aviation Instructor Handbook (2020 edition), a detailed reference for instructors that includes comprehensive content on learning theories and principles. I observed that it contained two learning myths and brought them to the FAA’s notice. They adjudicated the comment and graciously agreed to implement the correction in the next revision by removing the associated content from the manual. The myths reported were Left Brain/Right Brain and Learning Styles (Kinesthetic, Visual, Auditory learners, etc.).
A good number of myths, and debunkings of them: