IM931: Interdisciplinary Approaches to Machine Learning

Spring 2019, Centre for Interdisciplinary Methodologies, University of Warwick

Outline

This module is an interdisciplinary introduction to contemporary machine learning research and applications, focusing on deep learning techniques that use convolutional and/or recurrent neural network architectures to recognize and generate content from images, text, signals, sound, speech, and other predominantly unstructured data. Through a combination of theoretical, conceptual, and historical analysis and practical programming projects in the R programming language, the module teaches the basic application of these techniques while also conveying their historical origins and ethical implications.

Module Convenor

Dr Michael Castelle

Assessments

  • 40% Laboratory Assignment and Report (15 CATS: 2000 words; 20/30 CATS: 3000 words)
  • 10% Group Presentation
  • 50% Final Project and Report (15 CATS: 2000 words; 20 CATS: 3000 words; 30 CATS: 4000 words)

Reading Key

  • [R] Required reading
  • (*) Available for Presentation
  • [CS] Computer Science, [CogS] Cognitive Science, [H] History, [L] Law, [LT] Literary Theory, [M] Math/s, [P] Psychology, [S] Semiotics, [SS] Social Sciences, [Stat] Statistics, [T] Theory

Week 01. Introduction: A Social History of Machine Learning.

  • [R] Chollet, Francois, and J.J. Allaire. 2018. “What Is Deep Learning?” In Deep Learning with R, 1st edition, 3–23. Shelter Island, NY: Manning Publications.
  • [SS] Mackenzie, Adrian. 2017. “Introduction: Into the Data.” In Machine Learners: Archaeology of a Data Practice, 1–19. Cambridge, MA: MIT Press.
  • (*) [Stat] Breiman, Leo. 2001. “Statistical Modeling: The Two Cultures (with Comments and a Rejoinder by the Author).” Statistical Science 16 (3): 199–231.
  • (*) [SS] Boelaert, Julien, and Étienne Ollion. 2018. “The Great Regression: Machine Learning, Econometrics, and the Future of Quantitative Social Sciences.” Revue Française de Sociologie 59 (3): 475–506.
  • [H] Jones, Matthew L. 2018. “How We Became Instrumentalists (Again): Data Positivism since World War II.” Historical Studies in the Natural Sciences 48 (5): 673–84.

Week 02. Vectorization, Tensorization, and Epistemic Ensembles

  • [R] Chollet and Allaire 2018, 24-49.
  • [R] [CS] Domingos, Pedro. 2012. “A Few Useful Things to Know About Machine Learning.” Commun. ACM 55 (10): 78–87.
  • [R] [M] Jordan, Michael I. 1986. “An Introduction to Linear Algebra in Parallel Distributed Processing.” In Parallel Distributed Processing: Explorations in the Microstructure of Cognition, by D. E. Rumelhart, J. L. McClelland, and PDP Research Group, 1:365–422.
  • [CS] Russell, Stuart. 1996. “Machine Learning.” In Artificial Intelligence, edited by Margaret A. Boden, 89–133. Elsevier.
  • (*) [SS/H] Olazaran, Mikel. 1996. “A Sociological Study of the Official History of the Perceptrons Controversy.” Social Studies of Science 26 (3): 611–59.
  • (*) [SS] Bowker, Geoffrey, and Susan Leigh Star. 1999. “Some Tricks of the Trade in Analyzing Classification.” In Sorting Things Out, 33–50. MIT Press.
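
Since the practical thread of this week is turning data into tensors, here is a minimal sketch in R (the array contents are arbitrary illustrations, not module data):

```r
# Tensors in R are multidimensional arrays: here, a toy batch of 100
# grayscale "images", each 28 x 28 (the values are arbitrary noise).
library(keras)  # for array_reshape()

x <- array(runif(100 * 28 * 28), dim = c(100, 28, 28))
dim(x)       # 100 28 28 -> (samples, height, width): a rank-3 tensor

# Vectorization: flatten each image into a 784-dimensional vector, giving
# the rank-2 (samples, features) tensor that a dense classifier expects
x_flat <- array_reshape(x, dim = c(100, 28 * 28))
dim(x_flat)  # 100 784
```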

Week 03. Layers, Loss, and Classification

  • [R] Chollet and Allaire 2018, 50-83.
  • [R] [H] Buckner, Cameron, and James Garson. 2018. “Connectionism and Post-Connectionist Models.” In The Routledge Handbook of the Computational Mind, edited by Mark Sprevak and Matteo Colombo. London: Routledge.
  • [R] Danks, David. 2014. “Learning.” In The Cambridge Handbook of Artificial Intelligence, edited by Keith Frankish and William M. Ramsey. Cambridge: Cambridge University Press.
  • (*) [SS] Espeland, Wendy Nelson, and Mitchell L. Stevens. 1998. “Commensuration as a Social Process.” Annual Review of Sociology 24 (1): 313–43.
  • (*) [SS] Kockelman, Paul. 2013. “The Anthropology of an Equation: Sieves, Spam Filters, Agentive Algorithms, and Ontologies of Transformation.” HAU: Journal of Ethnographic Theory 3 (3): 33–61.
  • [CS] Langley, Pat. 2011. “The Changing Science of Machine Learning.” Machine Learning 82 (3): 275–79.
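
As a companion to the Chollet and Allaire chapter, a minimal sketch of the layers-plus-loss workflow using the keras R package; x_train and y_train are placeholders for vectorized inputs and one-hot labels:

```r
library(keras)

# A stack of layers: each dense layer is a learned transformation of its input
model <- keras_model_sequential() %>%
  layer_dense(units = 32, activation = "relu", input_shape = c(784)) %>%
  layer_dense(units = 10, activation = "softmax")  # 10-way classification

# The loss function is what gradient descent actually minimizes;
# categorical cross-entropy is the standard choice for classification
model %>% compile(
  optimizer = "rmsprop",
  loss = "categorical_crossentropy",
  metrics = c("accuracy")
)

# x_train / y_train stand in for vectorized data and one-hot labels
model %>% fit(x_train, y_train, epochs = 5, batch_size = 128)
```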

Week 04. Machine Learning Workflows: Parameters, Hyperparameters, and Technical Ensembles

  • [R] Chollet and Allaire 2018, 84-107.
  • [R] [SS] Mackenzie, Adrian. 2015. “The Production of Prediction: What Does Machine Learning Want?” European Journal of Cultural Studies 18 (4–5): 429–45.
  • (*) [T] Serres, Michel. 1982. “The Origin of Language: Biology, Information Theory, & Thermodynamics.” In Hermes: Literature, Science, Philosophy, edited by Josué V. Harari and David F. Bell, 71–83. Baltimore: Johns Hopkins University Press.
  • (*) [CS] Hinton, Geoffrey E., Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan R. Salakhutdinov. 2012. “Improving Neural Networks by Preventing Co-Adaptation of Feature Detectors,” July. https://arxiv.org/abs/1207.0580v1.
  • (*) [P] Plunkett, Kim, and Chris Sinha. 1992. “Connectionism and Developmental Theory.” British Journal of Developmental Psychology 10 (3): 209–54.
  • [CS] Rumelhart, David E., Geoffrey E. Hinton, and Ronald J. Williams. 1986. “Learning Representations by Back-Propagating Errors.” Nature 323 (6088): 533–36.
  • [H] “Geoff Hinton.” 1998. In Talking Nets: An Oral History of Neural Networks, edited by James A. Anderson and Edward Rosenfeld. Cambridge, MA: MIT Press.

Week 05. Image to Symbol: Convolutional Neural Networks (CNNs), Supervised Classification, and Iconicity

  • [R] Chollet and Allaire 2018, 134-163.
  • [R] [CS] Krizhevsky, Alex, Ilya Sutskever, and Geoffrey E. Hinton. 2012. “ImageNet Classification with Deep Convolutional Neural Networks.” In Advances in Neural Information Processing Systems 25: 26th Annual Conference on Neural Information Processing Systems 2012. Proceedings of a Meeting Held December 3-6, 2012, Lake Tahoe, Nevada, United States, edited by Peter L. Bartlett, Fernando C. N. Pereira, Christopher J. C. Burges, Léon Bottou, and Kilian Q. Weinberger, 1106–1114.
  • (*) [SS] Jaton, Florian. 2017. “We Get the Algorithms of Our Ground Truths: Designing Referential Databases in Digital Image Processing.” Social Studies of Science 47 (6): 811–40.
  • (*) [S] Elkins, James. 2003. “What Does Peirce’s Sign Theory Have to Say to Art History?” Culture, Theory and Critique 44 (1): 5–22.
  • [S] Peirce, Charles S. 1931. Collected Papers of Charles Sanders Peirce. Vol. 2, 2.227–2.308.
  • [CS] LeCun, Y., B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. 1989. “Backpropagation Applied to Handwritten Zip Code Recognition.” Neural Comput. 1 (4): 541–551.
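
A minimal convnet sketch in the keras R idiom, following the pattern of the Chollet and Allaire chapter; the 28 x 28 x 1 input shape is an assumption matching MNIST-style grayscale images:

```r
library(keras)

# Convolution layers learn local filters; pooling downsamples, so deeper
# layers see progressively larger regions of the image
model <- keras_model_sequential() %>%
  layer_conv_2d(filters = 32, kernel_size = c(3, 3), activation = "relu",
                input_shape = c(28, 28, 1)) %>%
  layer_max_pooling_2d(pool_size = c(2, 2)) %>%
  layer_conv_2d(filters = 64, kernel_size = c(3, 3), activation = "relu") %>%
  layer_max_pooling_2d(pool_size = c(2, 2)) %>%
  layer_flatten() %>%                              # image grid -> flat vector
  layer_dense(units = 10, activation = "softmax")  # image to symbol: a class label

model %>% compile(optimizer = "rmsprop",
                  loss = "categorical_crossentropy",
                  metrics = c("accuracy"))
summary(model)
```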

Week 06. Reading Week — NO CLASS

Laboratory Assignment and Report DUE Wednesday 13th February

Week 07. Sequence to Sequence: Recurrent Neural Networks (RNNs), Machine Translation, Structuralism and Poetics.

  • [R] Chollet and Allaire 2018, 164-217.
  • [R] [CogS] Bates, Elizabeth A., and Jeffrey L. Elman. 1993. “Connectionism and the Study of Change.” In Brain Development and Cognition: A Reader, 623–42. Oxford: Blackwell Publishers.
  • [R] [CS] Karpathy, Andrej. 2015. “The Unreasonable Effectiveness of Recurrent Neural Networks.” May 21, 2015. http://karpathy.github.io/2015/05/21/rnn-effectiveness/.
  • (*) [CogS] Elman, Jeffrey L. 1990. “Finding Structure in Time.” Cognitive Science 14 (2): 179–211. https://doi.org/10.1207/s15516709cog1402_1.
  • (*) [S] Saussure, Ferdinand de. 1916. Course in General Linguistics. New York: McGraw-Hill. Intro., ch. 2–5; pt. I, ch. 1–2; pt. II.
  • [CogS] Jordan, M. I. 1999. “Recurrent Networks.” In The MIT Encyclopedia of the Cognitive Sciences, edited by R. A. Wilson and F. C. Keil, 709–12. Cambridge, MA: MIT Press.
  • [CS] Mikolov, Tomas, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. “Distributed Representations of Words and Phrases and Their Compositionality.” In Proceedings of the 26th International Conference on Neural Information Processing Systems - Volume 2, 3111–3119. NIPS’13.
  • [CS] Hochreiter, S., and J. Schmidhuber. 1997. “Long Short-Term Memory.” Neural Computation 9 (8): 1735–80.
  • [CS] Gers, Felix A., Jürgen A. Schmidhuber, and Fred A. Cummins. 2000. “Learning to Forget: Continual Prediction with LSTM.” Neural Comput. 12 (10): 2451–2471.
  • [CS] Vaswani, Ashish, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. “Attention Is All You Need,” June. https://arxiv.org/abs/1706.03762v5.
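
A minimal sequence-model sketch in the keras R idiom, combining a word-embedding layer (compare the Mikolov et al. reading) with an LSTM (Hochreiter and Schmidhuber 1997); the vocabulary size and output task are illustrative assumptions:

```r
library(keras)

# Embedding: maps each of 10,000 word indices to a dense 32-dim vector,
# a "distributed representation" in the sense of the Mikolov et al. reading
model <- keras_model_sequential() %>%
  layer_embedding(input_dim = 10000, output_dim = 32) %>%
  # LSTM: a recurrent layer whose gates learn what to keep and what to forget
  # across the sequence (Hochreiter and Schmidhuber 1997; Gers et al. 2000)
  layer_lstm(units = 32) %>%
  layer_dense(units = 1, activation = "sigmoid")  # e.g. binary sentiment

model %>% compile(optimizer = "rmsprop",
                  loss = "binary_crossentropy",
                  metrics = c("accuracy"))
```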

Week 08. Generative Adversarial Networks, Creative AI, and the Habitus.

Final Project Draft Proposal DUE
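
For the practical side of this topic, a compact sketch of the adversarial setup in the keras R idiom, loosely following the GAN pattern in Chollet and Allaire (2018); the dimensions are illustrative and the alternating training loop is omitted:

```r
library(keras)

latent_dim <- 32  # size of the random noise vector fed to the generator

# Generator: maps a latent vector to a 28 x 28 "image"
generator <- keras_model_sequential() %>%
  layer_dense(units = 128, activation = "relu", input_shape = c(latent_dim)) %>%
  layer_dense(units = 28 * 28, activation = "sigmoid") %>%
  layer_reshape(target_shape = c(28, 28, 1))

# Discriminator: classifies images as real (1) or generated (0)
discriminator <- keras_model_sequential() %>%
  layer_flatten(input_shape = c(28, 28, 1)) %>%
  layer_dense(units = 128, activation = "relu") %>%
  layer_dense(units = 1, activation = "sigmoid")

discriminator %>% compile(optimizer = "rmsprop", loss = "binary_crossentropy")

# Adversarial stack: freeze the discriminator so that training the combined
# model only updates the generator, pushing it to fool the discriminator
freeze_weights(discriminator)
gan_input <- layer_input(shape = c(latent_dim))
gan_output <- discriminator(generator(gan_input))
gan <- keras_model(gan_input, gan_output)
gan %>% compile(optimizer = "rmsprop", loss = "binary_crossentropy")
```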

Week 09. Interpretability and Dialogicality

  • [R] [SS] Burrell, Jenna. 2016. “How the Machine ‘Thinks’: Understanding Opacity in Machine Learning Algorithms.” Big Data & Society 3 (1).
  • (*) [SS] Jones, Raya A. 2017. “What Makes a Robot ‘Social’?” Social Studies of Science, 1–24.
  • (*) [CogS] Marková, Ivana. 2003. Dialogicality and Social Representations: The Dynamics of Mind. Cambridge: Cambridge University Press.
  • (*) [LT] Bakhtin, M. M. 1982. The Dialogic Imagination: Four Essays. Edited by Michael Holquist. Translated by Caryl Emerson. Austin, Tex: University of Texas Press.

Week 10. Agency: Reinforcement Learning, Autonomous Agents, and Alignment

Final Project Revised Proposal DUE

  • (*) [SS] Goffman, Erving. 1986. Frame Analysis: An Essay on the Organization of Experience. Boston: Northeastern University Press.
  • (*) [CS] Leike, Jan, David Krueger, Tom Everitt, Miljan Martic, Vishal Maini, and Shane Legg. 2018. “Scalable Agent Alignment via Reward Modeling: A Research Direction.” ArXiv:1811.07871 [Cs, Stat], November. http://arxiv.org/abs/1811.07871.
  • (*) [CS, L] Hadfield-Menell, Dylan, and Gillian Hadfield. 2018. “Incomplete Contracting and AI Alignment.” ArXiv:1804.04268 [Cs], April. http://arxiv.org/abs/1804.04268.
  • (*) [CS] Zahavy, Tom, Nir Ben Zrihem, and Shie Mannor. 2016. “Graying the Black Box: Understanding DQNs,” February. https://arxiv.org/abs/1602.02658v4.
  • [CS] Moravčík, Matej, Martin Schmid, Neil Burch, Viliam Lisý, Dustin Morrill, Nolan Bard, Trevor Davis, Kevin Waugh, Michael Johanson, and Michael Bowling. 2017. “DeepStack: Expert-Level Artificial Intelligence in Heads-up No-Limit Poker.” Science 356 (6337): 508–13.
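
To make the reinforcement-learning readings concrete, a self-contained tabular Q-learning sketch in base R on a hypothetical five-state chain; deep RL systems such as the DQNs discussed by Zahavy et al. replace the Q table with a neural network:

```r
# Tabular Q-learning on a toy five-state chain (hypothetical example).
# Actions: 1 = left, 2 = right; moving right from state 5 pays 1 and resets.
n_states <- 5; n_actions <- 2
Q <- matrix(0, nrow = n_states, ncol = n_actions)
alpha <- 0.1; gamma <- 0.9; epsilon <- 0.1  # learning rate, discount, exploration

step <- function(s, a) {
  if (s == n_states && a == 2) return(list(s = 1, r = 1))  # reward, then reset
  s2 <- if (a == 2) min(s + 1, n_states) else max(s - 1, 1)
  list(s = s2, r = 0)
}

s <- 1
for (t in 1:5000) {
  # Epsilon-greedy: mostly exploit current value estimates, sometimes explore
  a <- if (runif(1) < epsilon) sample(n_actions, 1) else which.max(Q[s, ])
  out <- step(s, a)
  # Bellman update: move Q(s, a) toward r + gamma * max_a' Q(s', a')
  Q[s, a] <- Q[s, a] + alpha * (out$r + gamma * max(Q[out$s, ]) - Q[s, a])
  s <- out$s
}
round(Q, 2)  # the learned values should favour moving right in every state
```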
