Growth and fixed mindset classification using an NLP approach

Ege Topkoc

Abstract


Theories of growth and fixed mindsets have emerged over the last couple of decades. They describe how people think about and respond to problems. People with a fixed mindset tend to feel they cannot improve or overcome difficult situations. People with a growth mindset, on the other hand, tend to focus on the process and believe they can improve no matter where they started. We hypothesized that these mindsets can also be detected from text. Our goal was to design an NLP framework to classify sentences as expressing a growth or a fixed mindset. Our dataset consisted of around 2,000 sentences generated by a Large Language Model (LLM). We discovered a relationship between the sentiment of a sentence and its mindset type. Our model was a merged model that extracted features from words using word embeddings and combined them with manually extracted features such as sentiment scores. A bidirectional Long Short-Term Memory (LSTM) network was used to capture context from both the beginning and the end of each sentence. The final model achieved an F1 score of 0.99. The model could be improved with a larger dataset, preferably written by humans rather than generated by an AI.
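
As a concrete illustration of the merged architecture described above, the sketch below builds a two-branch model in TensorFlow/Keras: one branch embeds token IDs and passes them through a bidirectional LSTM, the other accepts manually extracted features, and the two are concatenated before a sigmoid output trained with binary cross-entropy. The vocabulary size, sequence length, embedding dimension, LSTM width, and the choice of four sentiment scores (e.g., VADER's neg/neu/pos/compound values) as the manual features are illustrative assumptions, not the exact configuration used in the study.

# Minimal sketch of a merged (two-branch) classifier; hyperparameters are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, Model

VOCAB_SIZE = 5000      # assumed vocabulary size
MAX_LEN = 40           # assumed maximum sentence length in tokens
EMBED_DIM = 100        # assumed word-embedding dimension
N_MANUAL_FEATURES = 4  # e.g., VADER neg/neu/pos/compound sentiment scores

# Branch 1: token IDs -> word embeddings -> bidirectional LSTM
tokens_in = layers.Input(shape=(MAX_LEN,), name="token_ids")
x = layers.Embedding(VOCAB_SIZE, EMBED_DIM, mask_zero=True)(tokens_in)
x = layers.Bidirectional(layers.LSTM(64))(x)

# Branch 2: manually extracted features such as sentiment scores
manual_in = layers.Input(shape=(N_MANUAL_FEATURES,), name="manual_features")

# Merge the two branches and classify growth (1) vs. fixed (0) mindset
merged = layers.Concatenate()([x, manual_in])
merged = layers.Dense(32, activation="relu")(merged)
output = layers.Dense(1, activation="sigmoid", name="mindset")(merged)

model = Model(inputs=[tokens_in, manual_in], outputs=output)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

During training, each example would be fed as a pair of inputs, [token_ids, manual_features], with a 0/1 mindset label; combining the two branches lets the network learn lexical patterns from the embeddings while still exploiting the sentiment-mindset relationship noted in the abstract.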


Keywords


NLP, Mindsets, LSTM, Merged, Sentiment Analysis

DOI: https://doi.org/10.23954/osj.v9i1.3555
