How to Write a Research Paper
Writing a research paper is a bit more difficult than a standard high school essay. You need to cite sources, use academic data and show scientific examples. Before beginning, you’ll need guidelines for how to write a research paper.
Start the Research Process
Before you begin writing the research paper, you must do your research. It is important that you understand the subject matter, formulate the ideas of your paper, create your thesis statement and learn how to speak about your given topic in an authoritative manner. You’ll be looking through online databases, encyclopedias, almanacs, periodicals, books, newspapers, government publications, reports, guides and scholarly resources. Take notes as you discover new information about your given topic. Also keep track of the references you use so you can build your bibliography later and cite your resources.
Develop Your Thesis Statement
When organizing your research paper, the thesis statement is where you explain to your readers what they can expect, present your claims, answer any questions that you were asked or explain your interpretation of the subject matter you’re researching. Therefore, the thesis statement must be strong and easy to understand. Your thesis statement must also be precise. It should answer the question you were assigned, and there should be an opportunity for your position to be opposed or disputed. The body of your manuscript should support your thesis, and it should be more than a generic fact.
Create an Outline
Many professors require outlines during the research paper writing process. You’ll find that they want outlines set up with a title page, abstract, introduction, research paper body and reference section. The title page is typically made up of the student’s name, the name of the college, the name of the class and the date of the paper. The abstract is a summary of the paper. An introduction typically consists of one or two pages and comments on the subject matter of the research paper. In the body of the research paper, you’ll break it down into materials and methods, results and discussion. Your references go in your bibliography. Use a research paper example to help you with your outline if necessary.
Organize Your Notes
When writing your first draft, you’re going to have to work on organizing your notes first. During this process, you’ll be deciding which references you’ll be putting in your bibliography and which will work best as in-text citations. You’ll be working on this more as you develop your working drafts and look at more sample papers to help guide you through the process.
Write Your Final Draft
After you’ve written a first and second draft and received corrections from your professor, it’s time to write your final copy. By now, you should have seen an example of a research paper layout and know how to put your paper together. You’ll have your title page, abstract, introduction, thesis statement, in-text citations, footnotes and bibliography complete. Be sure to check with your professor whether you’re writing in APA style or following another style guide.

Journal of Computer Science and Technology
Journal of Computer Science and Technology (JCST) is an international forum for scientists and engineers involved in all aspects of computer science and technology to publish high-quality, refereed papers. It is an international research journal sponsored by the Institute of Computing Technology (ICT), Chinese Academy of Sciences (CAS), and the China Computer Federation (CCF). The journal is jointly published by Science Press of China and Springer on a bimonthly basis in English.
The journal offers survey and review articles from experts in the field, promoting insight and understanding of the state of the art, and trends in technology. The contents include original research and innovative applications from all parts of the world. While the journal presents mostly previously unpublished materials, selected conference papers with exceptional merit are also published, at the discretion of the editors.
Coverage includes computer architecture and systems, artificial intelligence and pattern recognition, computer networks and distributed computing, computer graphics and multimedia, software systems, data management and data mining, theory and algorithms, emerging areas, and more.
Latest issue: Issue 6, December 2022 (December 3, 2022)
Latest articles
Tetris: A Heuristic Static Memory Management Framework for Uniform Memory Multicore Neural Network Accelerators
Authors (first, second and last of 6).
- Xiao-Bing Chen
- Yun-Ji Chen
- Content type: Regular Paper
- Published: 30 November 2022
- Pages: 1255 - 1270
SOCA-DOM: A Mobile System-on-Chip Array System for Analyzing Big Data on the Move
Authors (first, second and last of 7).
- Jiang-Yi Liu
- Pages: 1271 - 1289
TLP-LDPC: Three-Level Parallel FPGA Architecture for Fast Prototyping of LDPC Decoder Using High-Level Synthesis
- Yi-Fan Zhang
- Pages: 1290 - 1306
RV16: An Ultra-Low-Cost Embedded RISC-V Processor Core
- Yuan-Hu Cheng
- Li-Bo Huang
- Bing-Cai Sui
- Pages: 1307 - 1319
Answering Non-Answer Questions on Reverse Top-k Geo-Social Keyword Queries
- Xue-Qin Chang
- Cheng-Yang Luo
- Yun-Jun Gao
- Pages: 1320 - 1336
About this journal
The journal is abstracted and indexed in:
- ACM Digital Library
- Chinese Science Citation Database
- EBSCO Applied Science & Technology Source
- EBSCO Computer Science Index
- EBSCO Computers & Applied Sciences Complete
- EBSCO Discovery Service
- EBSCO Engineering Source
- EBSCO STM Source
- EI Compendex
- Google Scholar
- Journal Citation Reports/Science Edition
- OCLC WorldCat Discovery Service
- ProQuest ABI/INFORM
- ProQuest Advanced Technologies & Aerospace Database
- ProQuest-ExLibris Primo
- ProQuest-ExLibris Summon
- Science Citation Index Expanded (SCIE)
- TD Net Discovery Service
- UGC-CARE List (India)
- Review article
- Open Access
- Published: 02 October 2017
Computer-based technology and student engagement: a critical review of the literature
- Laura A. Schindler (ORCID: orcid.org/0000-0001-8730-5189)
- Gary J. Burkholder
- Osama A. Morad
- Craig Marsh
International Journal of Educational Technology in Higher Education, volume 14, Article number: 25 (2017)
Abstract
Computer-based technology has infiltrated many aspects of life and industry, yet there is little understanding of how it can be used to promote student engagement, a concept receiving strong attention in higher education due to its association with a number of positive academic outcomes. The purpose of this article is to present a critical review of the literature from the past 5 years related to how web-conferencing software, blogs, wikis, social networking sites (Facebook and Twitter), and digital games influence student engagement. We prefaced the findings with a substantive overview of student engagement definitions and indicators, which revealed three types of engagement (behavioral, emotional, and cognitive) that informed how we classified articles. Our findings suggest that digital games provide the most far-reaching influence across different types of student engagement, followed by web-conferencing and Facebook. Findings regarding wikis, blogs, and Twitter are less conclusive and limited by the small number of studies conducted within the past 5 years. Overall, the findings provide preliminary support that computer-based technology influences student engagement; however, additional research is needed to confirm and build on these findings. We conclude the article by providing a list of recommendations for practice, with the intent of increasing understanding of how computer-based technology may be purposefully implemented to achieve the greatest gains in student engagement.
Introduction
The digital revolution has profoundly affected daily living, evident in the ubiquity of mobile devices and the seamless integration of technology into common tasks such as shopping, reading, and finding directions (Anderson, 2016 ; Smith & Anderson, 2016 ; Zickuhr & Raine, 2014 ). The use of computers, mobile devices, and the Internet is at its highest level to date and expected to continue to increase as technology becomes more accessible, particularly for users in developing countries (Poushter, 2016 ). In addition, there is a growing number of people who are smartphone dependent, relying solely on smartphones for Internet access (Anderson & Horrigan, 2016 ) rather than more expensive devices such as laptops and tablets. Greater access to and demand for technology has presented unique opportunities and challenges for many industries, some of which have thrived by effectively digitizing their operations and services (e.g., finance, media) and others that have struggled to keep up with the pace of technological innovation (e.g., education, healthcare) (Gandhi, Khanna, & Ramaswamy, 2016 ).
Integrating technology into teaching and learning is not a new challenge for universities. Since the 1900s, administrators and faculty have grappled with how to effectively use technical innovations such as video and audio recordings, email, and teleconferencing to augment or replace traditional instructional delivery methods (Kaware & Sain, 2015 ; Westera, 2015 ). Within the past two decades, however, this challenge has been much more difficult due to the sheer volume of new technologies on the market. For example, in the span of 7 years (from 2008 to 2015), the number of active apps in Apple’s App Store increased from 5000 to 1.75 million. Over the next 4 years, the number of apps is projected to rise by 73%, totaling over 5 million (Nelson, 2016 ). Further compounding this challenge is the limited shelf life of new devices and software combined with significant internal organizational barriers that hinder universities from efficiently and effectively integrating new technologies (Amirault, 2012 ; Kinchin, 2012 ; Linder-VanBerschot & Summers 2015 ; Westera, 2015 ).
Many organizational barriers to technology integration arise from competing tensions between institutional policy and practice and faculty beliefs and abilities. For example, university administrators may view technology as a tool to attract and retain students, whereas faculty may struggle to determine how technology coincides with existing pedagogy (Lawrence & Lentle-Keenan, 2013 ; Lin, Singer, & Ha, 2010 ). In addition, some faculty may be hesitant to use technology due to lack of technical knowledge and/or skepticism about the efficacy of technology to improve student learning outcomes (Ashrafzadeh & Sayadian, 2015 ; Buchanan, Sainter, & Saunders, 2013 ; Hauptman, 2015 ; Johnson, 2013 ; Kidd, Davis, & Larke, 2016 ; Kopcha, Rieber, & Walker, 2016 ; Lawrence & Lentle-Keenan, 2013 ; Lewis, Fretwell, Ryan, & Parham, 2013 ; Reid, 2014 ). Organizational barriers to technology adoption are particularly problematic given the growing demands and perceived benefits among students about using technology to learn (Amirault, 2012 ; Cassidy et al., 2014 ; Gikas & Grant, 2013 ; Paul & Cochran, 2013 ). Surveys suggest that two-thirds of students use mobile devices for learning and believe that technology can help them achieve learning outcomes and better prepare them for a workforce that is increasingly dependent on technology (Chen, Seilhamer, Bennett, & Bauer, 2015 ; Dahlstrom, 2012 ). Universities that fail to effectively integrate technology into the learning experience miss opportunities to improve student outcomes and meet the expectations of a student body that has grown accustomed to the integration of technology into every facet of life (Amirault, 2012 ; Cook & Sonnenberg, 2014 ; Revere & Kovach, 2011 ; Sun & Chen, 2016 ; Westera, 2015 ).
The purpose of this paper is to provide a literature review on how computer-based technology influences student engagement within higher education settings. We focused on computer-based technology given the specific types of technologies (i.e., web-conferencing software, blogs, wikis, social networking sites, and digital games) that emerged from a broad search of the literature, which is described in more detail below. Computer-based technology (hereafter referred to as technology) requires the use of specific hardware, software, and microprocessing features available on a computer or mobile device. We also focused on student engagement as the dependent variable of interest because it encompasses many different aspects of the teaching and learning process (Bryson & Hand, 2007; Fredricks, Blumenfeld, & Paris, 2004; Wimpenny & Savin-Baden, 2013), compared to narrower variables in the literature such as final grades or exam scores. Furthermore, student engagement has received significant attention over the past several decades due to shifts towards student-centered, constructivist instructional methods (Haggis, 2009; Wright, 2011), mounting pressures to improve teaching and learning outcomes (Axelson & Flick, 2011; Kuh, 2009), and promising studies suggesting relationships between student engagement and positive academic outcomes (Carini, Kuh, & Klein, 2006; Center for Postsecondary Research, 2016; Hu & McCormick, 2012). Despite the interest in student engagement and the demand for more technology in higher education, there are no articles offering a comprehensive review of how these two variables intersect. Similarly, while many existing student engagement conceptual models have expanded to include factors that influence student engagement, none highlight the overt role of technology in the engagement process (Kahu, 2013; Lam, Wong, Yang, & Yi, 2012; Nora, Barlow, & Crisp, 2005; Wimpenny & Savin-Baden, 2013; Zepke & Leach, 2010).
Our review aims to address existing gaps in the student engagement literature and seeks to determine whether student engagement models should be expanded to include technology. The review also addresses some of the organizational barriers to technology integration (e.g., faculty uncertainty and skepticism about technology) by providing a comprehensive account of the research evidence regarding how technology influences student engagement. One limitation of the literature, however, is the lack of detail regarding how teaching and learning practices were used to select and integrate technology into learning. For example, the methodology section of many studies does not include a pedagogical justification for why a particular technology was used or details about the design of the learning activity itself. Therefore, it often is unclear how teaching and learning practices may have affected student engagement levels. We revisit this issue in more detail at the end of this paper in our discussions of areas for future research and recommendations for practice. We initiated our literature review by conducting a broad search for articles published within the past 5 years, using the keywords technology and higher education, in Google Scholar and the following research databases: Academic Search Complete, Communication & Mass Media Complete, Computers & Applied Sciences Complete, Education Research Complete, ERIC, PsycARTICLES, and PsycINFO. Our initial search revealed themes regarding which technologies were most prevalent in the literature (e.g., social networking, digital games), which then led to several more targeted searches of the same databases using specific keywords such as Facebook and student engagement. After both broad and targeted searches, we identified five technologies (web-conferencing software, blogs, wikis, social networking sites, and digital games) to include in our review.
We chose to focus on technologies for which there were multiple studies published, allowing us to identify areas of convergence and divergence in the literature and draw conclusions about positive and negative effects on student engagement. In total, we identified 69 articles relevant to our review, with 36 pertaining to social networking sites (21 for Facebook and 15 for Twitter), 14 pertaining to digital games, seven pertaining to wikis, and six each pertaining to blogs and web-conferencing software. Articles were categorized according to their influence on specific types of student engagement, which will be described in more detail below. In some instances, one article pertained to multiple types of engagement. In the sections that follow, we will provide an overview of student engagement, including an explanation of common definitions and indicators of engagement, followed by a synthesis of how each type of technology influences student engagement. Finally, we will discuss areas for future research and make recommendations for practice.
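As a quick arithmetic check on these counts, the following minimal Python tally (added here as an illustration; it assumes six articles each for blogs and web-conferencing, as stated above) reproduces the totals:

    # Article counts by technology as reported in this review
    # (assumption: six articles each for blogs and web-conferencing).
    article_counts = {
        "Facebook": 21,
        "Twitter": 15,
        "digital games": 14,
        "wikis": 7,
        "blogs": 6,
        "web-conferencing": 6,
    }

    social_networking_total = article_counts["Facebook"] + article_counts["Twitter"]
    print(social_networking_total)       # 36 articles on social networking sites
    print(sum(article_counts.values()))  # 69 articles in total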
- Student engagement
Interest in student engagement began over 70 years ago with Ralph Tyler’s research on the relationship between time spent on coursework and learning (Axelson & Flick, 2011 ; Kuh, 2009 ). Since then, the study of student engagement has evolved and expanded considerably, through the seminal works of Pace ( 1980 ; 1984 ) and Astin ( 1984 ) about how quantity and quality of student effort affect learning and many more recent studies on the environmental conditions and individual dispositions that contribute to student engagement (Bakker, Vergel, & Kuntze, 2015 ; Gilboy, Heinerichs, & Pazzaglia, 2015 ; Martin, Goldwasser, & Galentino, 2017 ; Pellas, 2014 ). Perhaps the most well-known resource on student engagement is the National Survey of Student Engagement (NSSE), an instrument designed to assess student participation in various educational activities (Kuh, 2009 ). The NSSE and other engagement instruments like it have been used in many studies that link student engagement to positive student outcomes such as higher grades, retention, persistence, and completion (Leach, 2016 ; McClenney, Marti, & Adkins, 2012 ; Trowler & Trowler, 2010 ), further convincing universities that student engagement is an important factor in the teaching and learning process. However, despite the increased interest in student engagement, its meaning is generally not well understood or agreed upon.
Student engagement is a broad and complex phenomenon for which there are many definitions grounded in psychological, social, and/or cultural perspectives (Fredricks et al., 1994; Wimpenny & Savin-Baden, 2013 ; Zepke & Leach, 2010 ). Review of definitions revealed that student engagement is defined in two ways. One set of definitions refer to student engagement as a desired outcome reflective of a student’s thoughts, feelings, and behaviors about learning. For example, Kahu ( 2013 ) defines student engagement as an “individual psychological state” that includes a student’s affect, cognition, and behavior (p. 764). Other definitions focus primarily on student behavior, suggesting that engagement is the “extent to which students are engaging in activities that higher education research has shown to be linked with high-quality learning outcomes” (Krause & Coates, 2008 , p. 493) or the “quality of effort and involvement in productive learning activities” (Kuh, 2009 , p. 6). Another set of definitions refer to student engagement as a process involving both the student and the university. For example, Trowler ( 2010 ) defined student engagement as “the interaction between the time, effort and other relevant resources invested by both students and their institutions intended to optimize the student experience and enhance the learning outcomes and development of students and the performance, and reputation of the institution” (p. 2). Similarly, the NSSE website indicates that student engagement is “the amount of time and effort students put into their studies and other educationally purposeful activities” as well as “how the institution deploys its resources and organizes the curriculum and other learning opportunities to get students to participate in activities that decades of research studies show are linked to student learning” (Center for Postsecondary Research, 2017 , para. 1).
Many existing models of student engagement reflect the latter set of definitions, depicting engagement as a complex, psychosocial process involving both student and university characteristics. Such models organize the engagement process into three areas: factors that influence student engagement (e.g., institutional culture, curriculum, and teaching practices), indicators of student engagement (e.g., interest in learning, interaction with instructors and peers, and meaningful processing of information), and outcomes of student engagement (e.g., academic achievement, retention, and personal growth) (Kahu, 2013 ; Lam et al., 2012 ; Nora et al., 2005 ). In this review, we examine the literature to determine whether technology influences student engagement. In addition, we will use Fredricks et al. ( 2004 ) typology of student engagement to organize and present research findings, which suggests that there are three types of engagement (behavioral, emotional, and cognitive). The typology is useful because it is broad in scope, encompassing different types of engagement that capture a range of student experiences, rather than narrower typologies that offer specific or prescriptive conceptualizations of student engagement. In addition, this typology is student-centered, focusing exclusively on student-focused indicators rather than combining student indicators with confounding variables, such as faculty behavior, curriculum design, and campus environment (Coates, 2008 ; Kuh, 2009 ). While such variables are important in the discussion of student engagement, perhaps as factors that may influence engagement, they are not true indicators of student engagement. Using the typology as a guide, we examined recent student engagement research, models, and measures to gain a better understanding of how behavioral, emotional, and cognitive student engagement are conceptualized and to identify specific indicators that correspond with each type of engagement, as shown in Fig. 1 .
Conceptual framework of types and indicators of student engagement
Behavioral engagement is the degree to which students are actively involved in learning activities (Fredricks et al., 2004; Kahu, 2013; Zepke, 2014). Indicators of behavioral engagement include time and effort spent participating in learning activities (Coates, 2008; Fredricks et al., 2004; Kahu, 2013; Kuh, 2009; Lam et al., 2012; Lester, 2013; Trowler, 2010) and interaction with peers, faculty, and staff (Coates, 2008; Kahu, 2013; Kuh, 2009; Bryson & Hand, 2007; Wimpenny & Savin-Baden, 2013; Zepke & Leach, 2010). Indicators of behavioral engagement reflect observable student actions and most closely align with Pace (1980) and Astin’s (1984) original conceptualizations of student engagement as quantity and quality of effort towards learning.
Emotional engagement is students’ affective reactions to learning (Fredricks et al., 2004; Lester, 2013; Trowler, 2010). Indicators of emotional engagement include attitudes, interests, and values towards learning (Fredricks et al., 2004; Kahu, 2013; Lester, 2013; Trowler, 2010; Wimpenny & Savin-Baden, 2013; Witkowski & Cornell, 2015) and a perceived sense of belonging within a learning community (Fredricks et al., 2004; Kahu, 2013; Lester, 2013; Trowler, 2010; Wimpenny & Savin-Baden, 2013). Emotional engagement often is assessed using self-report measures (Fredricks et al., 2004) and provides insight into how students feel about a particular topic, delivery method, or instructor.
Finally, cognitive engagement is the degree to which students invest in learning and expend mental effort to comprehend and master content (Fredricks et al., 2004; Lester, 2013). Indicators of cognitive engagement include: motivation to learn (Lester, 2013; Richardson & Newby, 2006; Zepke & Leach, 2010); persistence to overcome academic challenges and meet/exceed requirements (Fredricks et al., 2004; Kuh, 2009; Trowler, 2010); and deep processing of information (Fredricks et al., 2004; Kahu, 2013; Lam et al., 2012; Richardson & Newby, 2006) through critical thinking (Coates, 2008; Witkowski & Cornell, 2015), self-regulation (e.g., set goals, plan, organize study effort, and monitor learning; Fredricks et al., 2004; Lester, 2013), and the active construction of knowledge (Coates, 2008; Kuh, 2009). While cognitive engagement includes motivational aspects, much of the literature focuses on how students use active learning and higher-order thinking, in some form, to achieve content mastery. For example, there is significant emphasis on the importance of deep learning, which involves analyzing new learning in relation to previous knowledge, compared to surface learning, which is limited to memorization, recall, and rehearsal (Fredricks et al., 2004; Kahu, 2013; Lam et al., 2012).
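To make the typology concrete, here is a small illustrative sketch in Python (ours, not part of the original article) of how the three engagement types and their indicators, paraphrased from the summary above, might be represented and used to tag reviewed studies:

    # Illustrative only: engagement types and example indicators paraphrased
    # from the typology described above, used to tag studies by engagement type.
    ENGAGEMENT_TYPOLOGY = {
        "behavioral": [
            "time and effort spent on learning activities",
            "interaction with peers, faculty, and staff",
        ],
        "emotional": [
            "attitudes, interests, and values towards learning",
            "sense of belonging within a learning community",
        ],
        "cognitive": [
            "motivation to learn",
            "persistence to overcome academic challenges",
            "deep processing of information",
        ],
    }

    def classify_article(indicators_observed):
        """Return the engagement types whose indicators a study reports on."""
        return sorted(
            engagement_type
            for engagement_type, indicators in ENGAGEMENT_TYPOLOGY.items()
            if any(indicator in indicators_observed for indicator in indicators)
        )

    # A study reporting on interaction and sense of belonging touches both
    # behavioral and emotional engagement.
    print(classify_article([
        "interaction with peers, faculty, and staff",
        "sense of belonging within a learning community",
    ]))  # ['behavioral', 'emotional']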
While each type of engagement has distinct features, there is some overlap across cognitive, behavioral, and emotional domains. In instances where an indicator could correspond with more than one type of engagement, we chose to match the indicator to the type of engagement that most closely aligned, based on our review of the engagement literature and our interpretation of the indicators. Similarly, there is also some overlap among indicators. As a result, we combined and subsumed similar indicators found in the literature, where appropriate, to avoid redundancy. Achieving an in-depth understanding of student engagement and associated indicators was an important pre-cursor to our review of the technology literature. Very few articles used the term student engagement as a dependent variable given the concept is so broad and multidimensional. We found that specific indicators (e.g., interaction, sense of belonging, and knowledge construction) of student engagement were more common in the literature as dependent variables. Next, we will provide a synthesis of the findings regarding how different types of technology influence behavioral, emotional, and cognitive student engagement and associated indicators.
Influence of technology on student engagement
We identified five technologies post-literature search (i.e., web-conferencing, blogs, wikis, social networking sites , and digital games) to include in our review, based on frequency in which they appeared in the literature over the past 5 years. One commonality among these technologies is their potential value in supporting a constructivist approach to learning, characterized by the active discovery of knowledge through reflection of experiences with one’s environment, the connection of new knowledge to prior knowledge, and interaction with others (Boghossian, 2006 ; Clements, 2015 ). Another commonality is that most of the technologies, except perhaps for digital games, are designed primarily to promote interaction and collaboration with others. Our search yielded very few studies on how informational technologies, such as video lectures and podcasts, influence student engagement. Therefore, these technologies are notably absent from our review. Unlike the technologies we identified earlier, informational technologies reflect a behaviorist approach to learning in which students are passive recipients of knowledge that is transmitted from an expert (Boghossian, 2006 ). The lack of recent research on how informational technologies affect student engagement may be due to the increasing shift from instructor-centered, behaviorist approaches to student-centered, constructivist approaches within higher education (Haggis, 2009 ; Wright, 2011 ) along with the ubiquity of web 2.0 technologies.
- Web-conferencing
Web-conferencing software provides a virtual meeting space where users login simultaneously and communicate about a given topic. While each software application is unique, many share similar features such as audio, video, or instant messaging options for real-time communication; screen sharing, whiteboards, and digital pens for presentations and demonstrations; polls and quizzes for gauging comprehension or eliciting feedback; and breakout rooms for small group work (Bower, 2011 ; Hudson, Knight, & Collins, 2012 ; Martin, Parker, & Deale, 2012 ; McBrien, Jones, & Cheng, 2009 ). Of the technologies included in this literature review, web-conferencing software most closely mimics the face-to-face classroom environment, providing a space where instructors and students can hear and see each other in real-time as typical classroom activities (i.e., delivering lectures, discussing course content, asking/answering questions) are carried out (Francescucci & Foster, 2013 ; Hudson et al., 2012 ). Studies on web-conferencing software deployed Adobe Connect, Cisco WebEx, Horizon Wimba, or Blackboard Collaborate and made use of multiple features, such as screen sharing, instant messaging, polling, and break out rooms. In addition, most of the studies integrated web-conferencing software into courses on a voluntary basis to supplement traditional instructional methods (Andrew, Maslin-Prothero, & Ewens, 2015 ; Armstrong & Thornton, 2012 ; Francescucci & Foster, 2013 ; Hudson et al., 2012 ; Martin et al., 2012 ; Wdowik, 2014 ). Existing studies on web-conferencing pertain to all three types of student engagement.
Studies on web-conferencing and behavioral engagement reveal mixed findings. For example, voluntary attendance in web-conferencing sessions ranged from 54 to 57% (Andrew et al., 2015 ; Armstrong & Thornton, 2012 ) and, in a comparison between a blended course with regular web-conferencing sessions and a traditional, face-to-face course, researchers found no significant difference in student attendance in courses. However, students in the blended course reported higher levels of class participation compared to students in the face-to-face course (Francescucci & Foster, 2013 ). These findings suggest while web-conferencing may not boost attendance, especially if voluntary, it may offer more opportunities for class participation, perhaps through the use of communication channels typically not available in a traditional, face-to-face course (e.g., instant messaging, anonymous polling). Studies on web-conferencing and interaction, another behavioral indicator, support this assertion. For example, researchers found that students use various features of web-conferencing software (e.g., polling, instant message, break-out rooms) to interact with peers and the instructor by asking questions, expressing opinions and ideas, sharing resources, and discussing academic content (Andrew et al., 2015 ; Armstrong & Thornton, 2012 ; Hudson et al., 2012 ; Martin et al., 2012 ; Wdowik, 2014 ).
Studies on web-conferencing and cognitive engagement are more conclusive than those for behavioral engagement, although they are fewer in number. Findings suggest that students who participated in web-conferencing demonstrated critical reflection and enhanced learning through interactions with others (Armstrong & Thornton, 2012), higher-order thinking (e.g., problem-solving, synthesis, evaluation) in response to challenging assignments (Wdowik, 2014), and motivation to learn, particularly when using polling features (Hudson et al., 2012). Only one study examined how web-conferencing affects emotional engagement; its findings were positive, suggesting that students who participated in web-conferences had higher levels of interest in course content than those who did not (Francescucci & Foster, 2013). One possible reason for the positive cognitive and emotional engagement findings may be that web-conferencing software provides many features that promote active learning. For example, whiteboards and breakout rooms provide opportunities for real-time, collaborative problem-solving activities and discussions. However, additional studies are needed to isolate and compare specific web-conferencing features to determine which have the greatest effect on student engagement.
A blog, which is short for Weblog, is a collection of personal journal entries, published online and presented chronologically, to which readers (or subscribers) may respond by providing additional commentary or feedback. In order to create a blog, one must compose content for an entry, which may include text, hyperlinks, graphics, audio, or video, publish the content online using a blogging application, and alert subscribers that new content is posted. Blogs may be informal and personal in nature or may serve as formal commentary in a specific genre, such as in politics or education (Coghlan et al., 2007 ). Fortunately, many blog applications are free, and many learning management systems (LMSs) offer a blogging feature that is seamlessly integrated into the online classroom. The ease of blogging has attracted attention from educators, who currently use blogs as an instructional tool for the expression of ideas, opinions, and experiences and for promoting dialogue on a wide range of academic topics (Garrity, Jones, VanderZwan, de la Rocha, & Epstein, 2014 ; Wang, 2008 ).
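The compose-publish-notify workflow just described can be sketched in a few lines of Python; this is a hypothetical illustration (the class and method names are ours, not the API of any real blogging platform):

    # Hypothetical sketch of the blog workflow described above: compose an
    # entry, publish it in reverse-chronological order, and alert subscribers.
    from dataclasses import dataclass, field
    from datetime import datetime
    from typing import List

    @dataclass
    class BlogEntry:
        title: str
        body: str  # may mix text, hyperlinks, and references to media files
        published: datetime = field(default_factory=datetime.utcnow)
        comments: List[str] = field(default_factory=list)  # reader feedback

    @dataclass
    class Blog:
        entries: List[BlogEntry] = field(default_factory=list)
        subscribers: List[str] = field(default_factory=list)

        def publish(self, entry: BlogEntry) -> None:
            self.entries.insert(0, entry)  # newest entries are shown first
            for subscriber in self.subscribers:
                print(f"Notify {subscriber}: new post '{entry.title}'")

    blog = Blog(subscribers=["student@example.edu"])
    blog.publish(BlogEntry("Week 3 reflection", "My thoughts on this week's readings..."))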
Studies on blogs show consistently positive findings for many of the behavioral and emotional engagement indicators. For example, students reported that blogs promoted interaction with others, through greater communication and information sharing with peers (Chu, Chan, & Tiwari, 2012; Ivala & Gachago, 2012; Mansouri & Piki, 2016), and analyses of blog posts show evidence of students elaborating on one another’s ideas and sharing experiences and conceptions of course content (Sharma & Tietjen, 2016). Blogs also contribute to emotional engagement by providing students with opportunities to express their feelings about learning and by encouraging positive attitudes about learning (Dos & Demir, 2013; Chu et al., 2012; Yang & Chang, 2012). For example, Dos and Demir (2013) found that students expressed prejudices and fears about specific course topics in their blog posts. In addition, Yang and Chang (2012) found that interactive blogging, where comment features were enabled, led to more positive attitudes about course content and peers compared to solitary blogging, where comment features were disabled.
The literature on blogs and cognitive engagement is less consistent. Some studies suggest that blogs may help students engage in active learning, problem-solving, and reflection (Chawinga, 2017 ; Chu et al., 2012 ; Ivala & Gachago, 2012 ; Mansouri & Piki, 2016 ), while other studies suggest that students’ blog posts show very little evidence of higher-order thinking (Dos & Demir, 2013 ; Sharma & Tietjen, 2016 ). The inconsistency in findings may be due to the wording of blog instructions. Students may not necessarily demonstrate or engage in deep processing of information unless explicitly instructed to do so. Unfortunately, it is difficult to determine whether the wording of blog assignments contributed to the mixed results because many of the studies did not provide assignment details. However, studies pertaining to other technologies suggest that assignment wording that lacks specificity or requires low-level thinking can have detrimental effects on student engagement outcomes (Hou, Wang, Lin, & Chang, 2015 ; Prestridge, 2014 ). Therefore, blog assignments that are vague or require only low-level thinking may have adverse effects on cognitive engagement.
A wiki is a web page that can be edited by multiple users at once (Nakamaru, 2012 ). Wikis have gained popularity in educational settings as a viable tool for group projects where group members can work collaboratively to develop content (i.e., writings, hyperlinks, images, graphics, media) and keep track of revisions through an extensive versioning system (Roussinos & Jimoyiannis, 2013 ). Most studies on wikis pertain to behavioral engagement, with far fewer studies on cognitive engagement and none on emotional engagement. Studies pertaining to behavioral engagement reveal mixed results, with some showing very little enduring participation in wikis beyond the first few weeks of the course (Nakamaru, 2012 ; Salaber, 2014 ) and another showing active participation, as seen in high numbers of posts and edits (Roussinos & Jimoyiannis, 2013 ). The most notable difference between these studies is the presence of grading, which may account for the inconsistencies in findings. For example, in studies where participation was low, wikis were ungraded, suggesting that students may need extra motivation and encouragement to use wikis (Nakamaru, 2012 ; Salaber, 2014 ). Findings regarding the use of wikis for promoting interaction are also inconsistent. In some studies, students reported that wikis were useful for interaction, teamwork, collaboration, and group networking (Camacho, Carrión, Chayah, & Campos, 2016 ; Martínez, Medina, Albalat, & Rubió, 2013 ; Morely, 2012 ; Calabretto & Rao, 2011 ) and researchers found evidence of substantial collaboration among students (e.g., sharing ideas, opinions, and points of view) in wiki activity (Hewege & Perera, 2013 ); however, Miller, Norris, and Bookstaver ( 2012 ) found that only 58% of students reported that wikis promoted collegiality among peers. The findings in the latter study were unexpected and may be due to design flaws in the wiki assignments. For example, the authors noted that wiki assignments were not explicitly referred to in face-to-face classes; therefore, this disconnect may have prevented students from building on interactive momentum achieved during out-of-class wiki assignments (Miller et al., 2012 ).
Studies regarding cognitive engagement are limited in number but more consistent than those concerning behavioral engagement, suggesting that wikis promote high levels of knowledge construction (i.e., evaluation of arguments, the integration of multiple viewpoints, new understanding of course topics; Hewege & Perera, 2013 ), and are useful for reflection, reinforcing course content, and applying academic skills (Miller et al., 2012 ). Overall, there is mixed support for the use of wikis to promote behavioral engagement, although making wiki assignments mandatory and explicitly referring to wikis in class may help bolster participation and interaction. In addition, there is some support for using wikis to promote cognitive engagement, but additional studies are needed to confirm and expand on findings as well as explore the effect of wikis on emotional engagement.
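To illustrate the versioning idea behind wikis noted at the start of this section, here is a minimal Python sketch (purely illustrative, not drawn from the reviewed studies) of a page that retains every revision so group contributions can be tracked:

    # Illustrative sketch: a wiki page editable by multiple users, with each
    # revision kept so contributions can be reviewed or rolled back.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Revision:
        author: str
        content: str

    @dataclass
    class WikiPage:
        title: str
        revisions: List[Revision] = field(default_factory=list)

        def edit(self, author: str, content: str) -> None:
            """Record a new revision instead of overwriting the page."""
            self.revisions.append(Revision(author, content))

        def current(self) -> str:
            return self.revisions[-1].content if self.revisions else ""

        def contributors(self) -> List[str]:
            return [revision.author for revision in self.revisions]

    page = WikiPage("Group project: marketing plan")
    page.edit("alice", "Draft outline")
    page.edit("bob", "Draft outline plus target-market section")
    print(page.current())        # latest text of the page
    print(page.contributors())   # ['alice', 'bob']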
Social networking sites
Social networking is “the practice of expanding knowledge by making connections with individuals of similar interests” (Gunawardena et al., 2009 , p. 4). Social networking sites, such as Facebook, Twitter, Instagram, and LinkedIn, allow users to create and share digital content publicly or with others to whom they are connected and communicate privately through messaging features. Two of the most popular social networking sites in the educational literature are Facebook and Twitter (Camus, Hurt, Larson, & Prevost, 2016 ; Manca & Ranieri, 2013 ), which is consistent with recent statistics suggesting that both sites also are exceedingly popular among the general population (Greenwood, Perrin, & Duggan, 2016 ). In the sections that follow, we examine how both Facebook and Twitter influence different types of student engagement.
Facebook is a web-based service that allows users to create a public or private profile and invite others to connect. Users may build social, academic, and professional connections by posting messages in various media formats (i.e., text, pictures, videos) and commenting on, liking, and reacting to others’ messages (Bowman & Akcaoglu, 2014 ; Maben, Edwards, & Malone, 2014 ; Hou et al., 2015 ). Within an educational context, Facebook has often been used as a supplementary instructional tool to lectures or LMSs to support class discussions or develop, deliver, and share academic content and resources. Many instructors have opted to create private Facebook groups, offering an added layer of security and privacy because groups are not accessible to strangers (Bahati, 2015 ; Bowman & Akcaoglu, 2014 ; Clements, 2015 ; Dougherty & Andercheck, 2014 ; Esteves, 2012 ; Shraim, 2014 ; Maben et al., 2014 ; Manca & Ranieri, 2013 ; Naghdipour & Eldridge, 2016 ; Rambe, 2012 ). The majority of studies on Facebook address behavioral indicators of student engagement, with far fewer focusing on emotional or cognitive engagement.
Studies that examine the influence of Facebook on behavioral engagement focus both on participation in learning activities and interaction with peers and instructors. In most studies, Facebook activities were voluntary and participation rates ranged from 16 to 95%, with an average rate of 47% (Bahati, 2015; Bowman & Akcaoglu, 2014; Dougherty & Andercheck, 2014; Fagioli, Rios-Aguilar, & Deil-Amen, 2015; Rambe, 2012; Staines & Lauchs, 2013). Participation was assessed by tracking how many students joined course- or university-specific Facebook groups (Bahati, 2015; Bowman & Akcaoglu, 2014; Fagioli et al., 2015), visited or followed course-specific Facebook pages (DiVall & Kirwin, 2012; Staines & Lauchs, 2013), or posted at least once in a course-specific Facebook page (Rambe, 2012). The lowest level of participation (16%) arose from a study where community college students were invited to use the Schools App, a free application that connects students to their university’s private Facebook community. While the authors acknowledged that building an online community of college students is difficult (Fagioli et al., 2015), downloading the Schools App may have been a deterrent to widespread participation. In addition, use of the app was not tied to any specific courses or assignments; therefore, students may have lacked adequate incentive to use it. The highest level of participation (95%) in the literature arose from a study in which the instructor created a Facebook page where students could find or post study tips or ask questions. Followership to the page was highest around exams, when students likely had stronger motivations to access study tips and ask the instructor questions (DiVall & Kirwin, 2012). The wide range of participation in Facebook activities suggests that some students may be intrinsically motivated to participate, while other students may need some external encouragement. For example, Bahati (2015) found that when students assumed that a course-specific Facebook group was voluntary, only 23% participated, but when the instructor confirmed that the Facebook group was, in fact, mandatory, the level of participation rose to 94%.
While voluntary participation in Facebook activities may be lower than desired or expected (Dyson, Vickers, Turtle, Cowan, & Tassone, 2015 ; Fagioli et al., 2015 ; Naghdipour & Eldridge, 2016 ; Rambe, 2012 ), students seem to have a clear preference for Facebook compared to other instructional tools (Clements, 2015 ; DiVall & Kirwin, 2012 ; Hurt et al., 2012 ; Hou et al., 2015 ; Kent, 2013 ). For example, in one study where an instructor shared course-related information in a Facebook group, in the LMS, and through email, the level of participation in the Facebook group was ten times higher than in email or the LMS (Clements, 2015 ). In other studies, class discussions held in Facebook resulted in greater levels of participation and dialogue than class discussions held in LMS discussion forums (Camus et al., 2016 ; Hurt et al., 2012 ; Kent, 2013 ). Researchers found that preference for Facebook over the university’s LMS is due to perceptions that the LMS is outdated and unorganized and reports that Facebook is more familiar, convenient, and accessible given that many students already visit the social networking site multiple times per day (Clements, 2015 ; Dougherty & Andercheck, 2014 ; Hurt et al., 2012 ; Kent, 2013 ). In addition, students report that Facebook helps them stay engaged in learning through collaboration and interaction with both peers and instructors (Bahati, 2015 ; Shraim, 2014 ), which is evident in Facebook posts where students collaborated to study for exams, consulted on technical and theoretical problem solving, discussed course content, exchanged learning resources, and expressed opinions as well as academic successes and challenges (Bowman & Akcaoglu, 2014 ; Dougherty & Andercheck, 2014 ; Esteves, 2012 Ivala & Gachago, 2012 ; Maben et al., 2014 ; Rambe, 2012 ; van Beynen & Swenson, 2016 ).
There is far less evidence in the literature about the use of Facebook for emotional and cognitive engagement. In terms of emotional engagement, studies suggest that students feel positively about being part of a course-specific Facebook group and that Facebook is useful for expressing feelings about learning and concerns for peers, through features such as the “like” button and emoticons (Bowman & Akcaoglu, 2014 ; Dougherty & Andercheck, 2014 ; Naghdipour & Eldridge, 2016 ). In addition, being involved in a course-specific Facebook group was positively related to students’ sense of belonging in the course (Dougherty & Andercheck, 2014 ). The research on cognitive engagement is less conclusive, with some studies suggesting that Facebook participation is related to academic persistence (Fagioli et al., 2015 ) and self-regulation (Dougherty & Andercheck, 2014 ) while other studies show low levels of knowledge construction in Facebook posts (Hou et al., 2015 ), particularly when compared to discussions held in the LMS. One possible reason may be because the LMS is associated with formal, academic interactions while Facebook is associated with informal, social interactions (Camus et al., 2016 ). While additional research is needed to confirm the efficacy of Facebook for promoting cognitive engagement, studies suggest that Facebook may be a viable tool for increasing specific behavioral and emotional engagement indicators, such as interactions with others and a sense of belonging within a learning community.
Twitter is a web-based service where subscribers can post short messages, called tweets, in real-time that are no longer than 140 characters in length. Tweets may contain hyperlinks to other websites, images, graphics, and/or videos and may be tagged by topic using the hashtag symbol before the designated label (e.g., #elearning). Twitter subscribers may “follow” other users and gain access to their tweets and also may “retweet” messages that have already been posted (Hennessy, Kirkpatrick, Smith, & Border, 2016 ; Osgerby & Rush, 2015 ; Prestridge, 2014 ; West, Moore, & Barry, 2015 ; Tiernan, 2014 ;). Instructors may use Twitter to post updates about the course, clarify expectations, direct students to additional learning materials, and encourage students to discuss course content (Bista, 2015 ; Williams & Whiting, 2016 ). Several of the studies on the use of Twitter included broad, all-encompassing measures of student engagement and produced mixed findings. For example, some studies suggest that Twitter increases student engagement (Evans, 2014 ; Gagnon, 2015 ; Junco, Heibergert, & Loken, 2011 ) while other studies suggest that Twitter has little to no influence on student engagement (Junco, Elavsky, & Heiberger, 2013 ; McKay, Sanko, Shekhter, & Birnbach, 2014 ). In both studies suggesting little to no influence on student engagement, Twitter use was voluntary and in one of the studies faculty involvement in Twitter was low, which may account for the negative findings (Junco et al., 2013 ; McKay et al., 2014 ). Conversely, in the studies that show positive findings, Twitter use was mandatory and often directly integrated with required assignments (Evans, 2014 ; Gagnon, 2015 ; Junco et al., 2011 ). Therefore, making Twitter use mandatory, increasing faculty involvement in Twitter, and integrating Twitter into assignments may help to increase student engagement.
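As a simple illustration of the tweet constraints described above (the 140-character limit in force when these studies were conducted, and hashtag topic tags), consider the following sketch; it is ours and is not tied to Twitter's actual API:

    # Illustrative check of the 140-character limit and hashtag tagging.
    import re

    MAX_TWEET_LENGTH = 140

    def fits_tweet_limit(text: str) -> bool:
        """Return True if the message fits within the 140-character limit."""
        return len(text) <= MAX_TWEET_LENGTH

    def extract_hashtags(text: str) -> list:
        """Pull out topic tags such as '#elearning' from a message."""
        return re.findall(r"#\w+", text)

    tweet = "Great discussion on student engagement today #elearning #highered"
    print(fits_tweet_limit(tweet))   # True
    print(extract_hashtags(tweet))   # ['#elearning', '#highered']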
Studies pertaining to specific behavioral student engagement indicators also reveal mixed findings. For example, in studies where course-related Twitter use was voluntary, 45-91% of students reported using Twitter during the term (Hennessy et al., 2016 ; Junco et al., 2013 ; Ross, Banow, & Yu, 2015 ; Tiernan, 2014 ; Williams & Whiting, 2016 ), but only 30-36% reported making contributions to the course-specific Twitter page (Hennessy et al., 2016 ; Tiernan, 2014 ; Ross et al., 2015 ; Williams & Whiting, 2016 ). The study that reported a 91% participation rate was unique because the course-specific Twitter page was accessible via a public link. Therefore, students who chose only to view the content (58%), rather than contribute to the page, did not have to create a Twitter account (Hennessy et al., 2016 ). The convenience of not having to create an account may be one reason for much higher participation rates. In terms of low participation rates, a lack of literacy, familiarity, and interest in Twitter , as well as a preference for Facebook , are cited as contributing factors (Bista, 2015 ; McKay et al., 2014 ; Mysko & Delgaty, 2015 ; Osgerby & Rush, 2015 ; Tiernan, 2014 ). However, when the use of Twitter was required and integrated into class discussions, the participation rate was 100% (Gagnon, 2015 ). Similarly, 46% of students in one study indicated that they would have been more motivated to participate in Twitter activities if they were graded (Osgerby & Rush, 2015 ), again confirming the power of extrinsic motivating factors.
Studies also show mixed results for the use of Twitter to promote interactions with peers and instructors. Researchers found that when instructors used Twitter to post updates about the course, ask and answer questions, and encourage students to tweet about course content, there was evidence of student-student and student-instructor interactions in tweets (Hennessy et al., 2016 ; Tiernan, 2014 ). Some students echoed these findings, suggesting that Twitter is useful for sharing ideas and resources, discussing course content, asking the instructor questions, and networking (Chawinga, 2017 ; Evans, 2014 ; Gagnon, 2015 ; Hennessy et al., 2016 ; Mysko & Delgaty, 2015 ; West et al., 2015 ) and is preferable over speaking aloud in class because it is more comfortable, less threatening, and more concise due to the 140 character limit (Gagnon, 2015 ; Mysko & Delgaty, 2015 ; Tiernan, 2014 ). Conversely, other students reported that Twitter was not useful for improving interaction because they viewed it predominately for social, rather than academic, interactions and they found the 140 character limit to be frustrating and restrictive. A theme among the latter studies was that a large proportion of the sample had never used Twitter before (Bista, 2015 ; McKay et al., 2014 ; Osgerby & Rush, 2015 ), which may have contributed to negative perceptions.
The literature on the use of Twitter for cognitive and emotional engagement is minimal but nonetheless promising in terms of promoting knowledge gains, the practical application of content, and a sense of belonging among users. For example, using Twitter to respond to questions that arose in lectures and tweet about course content throughout the term is associated with increased understanding of course content and application of knowledge (Kim et al., 2015 ; Tiernan, 2014 ; West et al., 2015 ). While the underlying mechanisms pertaining to why Twitter promotes an understanding of content and application of knowledge are not entirely clear, Tiernan ( 2014 ) suggests that one possible reason may be that Twitter helps to break down communication barriers, encouraging shy or timid students to participate in discussions that ultimately are richer in dialogue and debate. In terms of emotional engagement, students who participated in a large, class-specific Twitter page were more likely to feel a sense of community and belonging compared to those who did not participate because they could more easily find support from and share resources with other Twitter users (Ross et al., 2015 ). Despite the positive findings about the use of Twitter for cognitive and emotional engagement, more studies are needed to confirm existing results regarding behavioral engagement and target additional engagement indicators such as motivation, persistence, and attitudes, interests, and values about learning. In addition, given the strong negative perceptions of Twitter that still exist, additional studies are needed to confirm Twitter ’s efficacy for promoting different types of behavioral engagement among both novice and experienced Twitter users, particularly when compared to more familiar tools such as Facebook or LMS discussion forums.
- Digital games
Digital games are “applications using the characteristics of video and computer games to create engaging and immersive learning experiences for delivery of specified learning goals, outcomes and experiences” (de Freitas, 2006 , p. 9). Digital games often serve the dual purpose of promoting the achievement of learning outcomes while making learning fun by providing simulations of real-world scenarios as well as role play, problem-solving, and drill and repeat activities (Boyle et al., 2016 ; Connolly, Boyle, MacArthur, Hainey, & Boyle, 2012 ; Scarlet & Ampolos, 2013 ; Whitton, 2011 ). In addition, gamified elements, such as digital badges and leaderboards, may be integrated into instruction to provide additional motivation for completing assigned readings and other learning activities (Armier, Shepherd, & Skrabut, 2016 ; Hew, Huang, Chu, & Chiu, 2016 ). The pedagogical benefits of digital games are somewhat distinct from the other technologies addressed in this review, which are designed primarily for social interaction. While digital games may be played in teams or allow one player to compete against another, the focus of their design often is on providing opportunities for students to interact with academic content in a virtual environment through decision-making, problem-solving, and reward mechanisms. For example, a digital game may require students to adopt a role as CEO in a computer-simulated business environment, make decisions about a series of organizational issues, and respond to the consequences of those decisions. In this example and others, digital games use adaptive learning principles, where the learning environment is re-configured or modified in response to the actions and needs of students (Bower, 2016 ). Most of the studies on digital games focused on cognitive and emotional indicators of student engagement, in contrast to the previous technologies addressed in this review which primarily focused on behavioral indicators of engagement.
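The adaptive-learning principle mentioned above, in which the game environment is re-configured in response to students' actions and needs, can be illustrated with a minimal rule-based sketch; the thresholds and rule below are assumptions for illustration, not taken from any reviewed game:

    # Illustrative adaptive-difficulty rule: adjust the challenge level based
    # on a student's recent performance (scores assumed to be on a 0-100 scale).
    def adjust_difficulty(current_level: int, recent_scores: list) -> int:
        if not recent_scores:
            return current_level
        average = sum(recent_scores) / len(recent_scores)
        if average >= 80:   # student is mastering the content: add challenge
            return current_level + 1
        if average < 50:    # student is struggling: scaffold with easier tasks
            return max(1, current_level - 1)
        return current_level  # performance is in range: keep the environment as-is

    print(adjust_difficulty(3, [85, 90, 78]))  # 4
    print(adjust_difficulty(3, [40, 55, 45]))  # 2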
Existing studies provide support for the influence of digital games on cognitive engagement, through achieving a greater understanding of course content and demonstrating higher-order thinking skills (Beckem & Watkins, 2012; Farley, 2013; Ke, Xie, & Xie, 2016; Marriott, Tan, & Marriott, 2015), particularly when compared to traditional instructional methods, such as giving lectures or assigning textbook readings (Lu, Hallinger, & Showanasai, 2014; Siddique, Ling, Roberson, Xu, & Geng, 2013; Zimmermann, 2013). For example, in a study comparing courses that offered computer simulations of business challenges (e.g., implementing a new information technology system, managing a startup company, and managing a brand of medicine in a simulated market environment) and courses that did not, students in simulation-based courses reported higher levels of action-directed learning (i.e., connecting theory to practice in a business context) than students in traditional, non-simulation-based courses (Lu et al., 2014). Similarly, engineering students who participated in a car simulator game, which was designed to help students apply and reinforce the knowledge gained from lectures, demonstrated higher levels of critical thinking (i.e., analysis, evaluation) on a quiz than students who only attended lectures (Siddique et al., 2013).
Motivation is another cognitive engagement indicator that is linked to digital games (Armier et al., 2016; Chang & Wei, 2016; Dichev & Dicheva, 2017; Grimley, Green, Nilsen, & Thompson, 2012; Hew et al., 2016; Ibáñez, Di-Serio, & Delgado-Kloos, 2014; Ke et al., 2016; Liu, Cheng, & Huang, 2011; Nadolny & Halabi, 2016). Researchers found that incorporating gamified elements into courses, such as giving students digital rewards (e.g., redeemable points, trophies, and badges) for participating in learning activities or creating competition through leaderboards where students can see how they rank against other students, positively affects student motivation to complete learning tasks (Armier et al., 2016; Chang & Wei, 2016; Hew et al., 2016; Nadolny & Halabi, 2016). In addition, students who participated in gamified elements, such as trying to earn digital badges, were more motivated to complete particularly difficult learning activities (Hew et al., 2016) and showed persistence in exceeding learning requirements (Ibáñez et al., 2014). Research on emotional engagement may help to explain these findings. Studies suggest that digital games positively affect student attitudes about learning, evident in student reports that games are fun, interesting, and enjoyable (Beckem & Watkins, 2012; Farley, 2013; Grimley et al., 2012; Hew et al., 2016; Liu et al., 2011; Zimmermann, 2013), which may account for higher levels of student motivation in courses that offered digital games.
Research on digital games and behavioral engagement is more limited, with only one study suggesting that games lead to greater participation in educational activities (Hew et al., 2016). Therefore, more research is needed to explore how digital games may influence behavioral engagement. In addition, research is needed to determine whether the underlying technology associated with digital games (e.g., computer-based simulations and virtual realities) produces positive engagement outcomes or whether common mechanisms associated with both digital and non-digital games (e.g., role play, rewards, and competition) account for those outcomes. For example, studies in which non-digital, face-to-face games were used also showed positive effects on student engagement (Antunes, Pacheco, & Giovanela, 2012; Auman, 2011; Coffey, Miller, & Feuerstein, 2011; Crocco, Offenholley, & Hernandez, 2016; Poole, Kemp, Williams, & Patterson, 2014; Scarlet & Ampolos, 2013); therefore, it is unclear if and how digitizing games contributes to student engagement.
Discussion and implications
Student engagement is linked to a number of academic outcomes, such as retention, grade point average, and graduation rates (Carini et al., 2006 ; Center for Postsecondary Research, 2016 ; Hu & McCormick, 2012 ). As a result, universities have shown a strong interest in how to increase student engagement, particularly given rising external pressures to improve learning outcomes and prepare students for academic success (Axelson & Flick, 2011 ; Kuh, 2009 ). There are various models of student engagement that identify factors that influence student engagement (Kahu, 2013 ; Lam et al., 2012 ; Nora et al., 2005 ; Wimpenny & Savin-Baden, 2013 ; Zepke & Leach, 2010 ); however, none include the overt role of technology despite the growing trend and student demands to integrate technology into the learning experience (Amirault, 2012 ; Cook & Sonnenberg, 2014 ; Revere & Kovach, 2011 ; Sun & Chen, 2016 ; Westera, 2015 ). Therefore, the primary purpose of our literature review was to explore whether technology influences student engagement. The secondary purpose was to address skepticism and uncertainty about pedagogical benefits of technology (Ashrafzadeh & Sayadian, 2015 ; Kopcha et al., 2016 ; Reid, 2014 ) by reviewing the literature regarding the efficacy of specific technologies (i.e., web-conferencing software, blogs, wikis, social networking sites, and digital games) for promoting student engagement and offering recommendations for effective implementation, which are included at the end of this paper. In the sections that follow, we provide an overview of the findings, an explanation of existing methodological limitations and areas for future research, and a list of best practices for integrating the technologies we reviewed into the teaching and learning process.
Summary of findings
Findings from our literature review provide preliminary support for including technology as a factor that influences student engagement in existing models (Table 1). One overarching theme is that most of the technologies we reviewed had a positive influence on multiple indicators of student engagement, which may lead to a larger return on investment in terms of learning outcomes. For example, digital games influence all three types of student engagement and six of the seven indicators we identified, surpassing the other technologies in this review. There were several key differences in design and pedagogical use between digital games and the other technologies that may explain these findings. First, digital games were designed to provide authentic learning contexts in which students could practice skills and apply learning (Beckem & Watkins, 2012; Farley, 2013; Grimley et al., 2012; Ke et al., 2016; Liu et al., 2011; Lu et al., 2014; Marriott et al., 2015; Siddique et al., 2013), which is consistent with experiential learning and adult learning theories. Experiential learning theory suggests that learning occurs through interaction with one's environment (Kolb, 2014), while adult learning theory suggests that adult learners want to be actively involved in the learning process and be able to apply learning to real-life situations and problems (Cercone, 2008). Second, students reported that digital games (and gamified elements) are fun, enjoyable, and interesting (Beckem & Watkins, 2012; Farley, 2013; Grimley et al., 2012; Hew et al., 2016; Liu et al., 2011; Zimmermann, 2013), feelings that are associated with a flow-like state in which one is completely immersed in and engaged with the activity (Csikszentmihalyi, 1988; Weibel, Wissmath, Habegger, Steiner, & Groner, 2008). Third, digital games were closely integrated into the curriculum as required activities (Farley, 2013; Grimley et al., 2012; Ke et al., 2016; Liu et al., 2011; Marriott et al., 2015; Siddique et al., 2013), as opposed to wikis, Facebook, and Twitter, which were often voluntary and used to supplement lectures (Dougherty & Andercheck, 2014; Nakamaru, 2012; Prestridge, 2014; Rambe, 2012).
Web-conferencing software and Facebook also yielded positive findings, influencing four of the seven indicators of student engagement, more than other collaborative technologies such as blogs, wikis, and Twitter. Web-conferencing software was unique due to the sheer number of collaborative features it offers, providing multiple ways for students to actively engage with course content (screen sharing, whiteboards, digital pens) and interact with peers and the instructor (audio, video, text chats, breakout rooms) (Bower, 2011; Hudson et al., 2012; Martin et al., 2012; McBrien et al., 2009); this may account for the effects on multiple indicators of student engagement. Positive findings regarding Facebook's influence on student engagement could be explained by a strong familiarity with and preference for the social networking site (Clements, 2015; DiVall & Kirwin, 2012; Hurt et al., 2012; Hou et al., 2015; Kent, 2013; Manca & Ranieri, 2013), compared to Twitter, which was less familiar or interesting to students (Bista, 2015; McKay et al., 2014; Mysko & Delgaty, 2015; Osgerby & Rush, 2015; Tiernan, 2014). Wikis had the lowest influence on student engagement, with mixed findings regarding behavioral engagement, limited but conclusive findings regarding one indicator of cognitive engagement (deep processing of information), and no studies pertaining to other indicators of cognitive engagement (motivation, persistence) or emotional engagement.
Another theme that arose was the prevalence of mixed findings across multiple technologies regarding behavioral engagement. Overall, the vast majority of studies addressed behavioral engagement, and we expected that technologies designed specifically for social interaction, such as web-conferencing, wikis, and social networking sites, would yield more conclusive findings. However, one possible reason for the mixed findings may be that the technologies were voluntary in many studies, resulting in lower than desired participation rates and missed opportunities for interaction (Armstrong & Thornton, 2012; Fagioli et al., 2015; Nakamaru, 2012; Rambe, 2012; Ross et al., 2015; Williams & Whiting, 2016), and mandatory in a few studies, yielding higher levels of participation and interaction (Bahati, 2015; Gagnon, 2015; Roussinos & Jimoyiannis, 2013). Another possible reason for the mixed findings is that measures of variables differed across studies. For example, in some studies participation meant that a student signed up for a Twitter account (Tiernan, 2014), used the Twitter account for class (Williams & Whiting, 2016), or viewed the course-specific Twitter page (Hennessy et al., 2016). The pedagogical uses of the technologies also varied considerably across studies, making it difficult to make comparisons. For example, Facebook was used in studies to share learning materials (Clements, 2015; Dyson et al., 2015), answer student questions about academic content or administrative issues (Rambe, 2012), prepare for upcoming exams and share study tips (Bowman & Akcaoglu, 2014; DiVall & Kirwin, 2012), complete group work (Hou et al., 2015; Staines & Lauchs, 2013), and discuss course content (Camus et al., 2016; Kent, 2013; Hurt et al., 2012). Finally, cognitive indicators (motivation and persistence) drew the fewest studies, which suggests that research is needed to determine whether technologies affect these indicators.
Methodological limitations
While there appears to be preliminary support for the use of many of the technologies to promote student engagement, there are significant methodological limitations in the literature and, as a result, findings should be interpreted with caution. First, many studies used small sample sizes and were limited to one course, one degree level, and one university. Therefore, generalizability is limited. Second, very few studies used experimental or quasi-experimental designs; therefore, very little evidence exists to substantiate a cause and effect relationship between technologies and student engagement indicators. In addition, in many studies that did use experimental or quasi-experimental designs, participants were not randomized; rather, participants who volunteered to use a specific technology were compared to those who chose not to use the technology. As a result, there is a possibility that fundamental differences between users and non-users could have affected the engagement results. Furthermore, many of the studies did not isolate specific technological features (e.g., using only the breakout rooms for group work in web-conferencing software, rather than using the chat feature, screen sharing, and breakout rooms for group work). Using multiple features at once could have conflated student engagement results. Third, many studies relied on one source to measure technological and engagement variables (single source bias), such as self-report data (i.e., reported usage of technology and perceptions of student engagement), which may have affected the validity of the results. Fourth, many studies were conducted during a very brief timeframe, such as one academic term. As a result, positive student engagement findings may be attributed to a "novelty effect" (Dichev & Dicheva, 2017) associated with using a new technology. Finally, many studies lack adequate details about learning activities, raising questions about whether poor instructional design may have adversely affected results. For example, an instructor may intend to elicit higher-order thinking from students, but if learning activity instructions are written using low-level verbs, such as identify, describe, and summarize, students will be less likely to engage in higher-order thinking.
Areas for future research
The findings of our literature review suggest that the influence of technology on student engagement is still a developing area of knowledge that requires additional research to build on promising, but limited, evidence, clarify mixed findings, and address several gaps in the literature. As such, our recommendations for future areas of research are as follows:
Examine the effect of collaborative technologies (i.e., web-conferencing, blogs, wikis, social networking sites ) on emotional and cognitive student engagement. There are significant gaps in the literature regarding whether these technologies affect attitudes, interests, and values about learning; a sense of belonging within a learning community; motivation to learn; and persistence to overcome academic challenges and meet or exceed requirements.
Clarify mixed findings, particularly regarding how web-conferencing software, wikis, and Facebook and Twitter affect participation in learning activities. Researchers should make considerable efforts to gain consensus or increase consistency on how participation is measured (e.g., visited Facebook group or contributed one post a week) in order to make meaningful comparisons and draw conclusions about the efficacy of various technologies for promoting behavioral engagement. In addition, further research is needed to clarify findings regarding how wikis and Twitter influence interaction and how blogs and Facebook influence deep processing of information. Future research studies should include justifications for the pedagogical use of specific technologies and detailed instructions for learning activities to minimize adverse findings from poor instructional design and to encourage replication.
Conduct longitudinal studies over several academic terms and across multiple academic disciplines, degree levels, and institutions to determine long-term effects of specific technologies on student engagement and to increase generalizability of findings. Also, future studies should take individual factors into account, such as gender, age, and prior experience with the technology. Studies suggest that a lack of prior experience or familiarity with Twitter was a barrier to Twitter use in educational settings (Bista, 2015; Mysko & Delgaty, 2015; Tiernan, 2014); therefore, prior experience deserves particular attention in future studies.
Compare student engagement outcomes between and among different technologies and non-technological alternatives. For example, studies suggest that students prefer Facebook over Twitter (Bista, 2015; Osgerby & Rush, 2015), but there were no studies that compared these technologies for promoting student engagement. Also, studies are needed to isolate and compare different features within the same technology to determine which might be most effective for increasing engagement. Finally, studies on digital games (Beckem & Watkins, 2012; Grimley et al., 2012; Ke et al., 2016; Lu et al., 2014; Marriott et al., 2015; Siddique et al., 2013) and face-to-face games (Antunes et al., 2012; Auman, 2011; Coffey et al., 2011; Crocco et al., 2016; Poole et al., 2014; Scarlet & Ampolos, 2013) show similar, positive effects on student engagement; therefore, additional research is needed to determine the degree to which the delivery method (i.e., digital versus face-to-face) accounts for positive gains in student engagement.
Determine whether other technologies not included in this review influence student engagement. Facebook and Twitter regularly appear in the literature regarding social networking, but it is unclear how other popular social networking sites, such as LinkedIn, Instagram, and Flickr, influence student engagement. Future research should focus on the efficacy of these and other popular social networking sites for promoting student engagement. In addition, there were very few studies about whether informational technologies, which involve the one-way transmission of information to students, affect different types of student engagement. Future research should examine whether informational technologies, such as video lectures, podcasts, and pre-recorded, narrated PowerPoint presentations or screencasts, affect student engagement. Finally, studies should examine the influence of mobile software and technologies, such as educational apps or smartphones, on student engagement.
Achieve greater consensus on the meaning of student engagement and its distinction from similar concepts in the literature, such as social and cognitive presence (Garrison & Arbaugh, 2007).
Recommendations for practice
Despite the existing gaps and mixed findings in the literature, we were able to compile a list of recommendations for when and how to use technology to increase the likelihood of promoting student engagement. What follows is not an exhaustive list; rather, it is a synthesis of both research findings and lessons learned from the studies we reviewed. There may be other recommendations to add to this list; however, our intent is to provide some useful information to help address barriers to technology integration among faculty who feel uncertain or unprepared to use technology (Ashrafzadeh & Sayadian, 2015 ; Hauptman, 2015 ; Kidd et al., 2016 ; Reid, 2014 ) and to add to the body of practical knowledge in instructional design and delivery. Our recommendations for practice are as follows:
Consider context before selecting technologies. Contextual factors such as existing technological infrastructure and requirements, program and course characteristics, and the intended audience will help determine which technologies, if any, are most appropriate (Bullen & Morgan, 2011; Bullen, Morgan, & Qayyum, 2011). For example, requiring students to use a blog that is not well integrated with the existing LMS may prove too frustrating for both the instructor and students. Similarly, integrating Facebook- and Twitter-based learning activities throughout a marketing program may be more appropriate, given the subject matter, compared to doing so in an engineering or accounting program where social media is less integral to the profession. Finally, do not assume that students appreciate or are familiar with all technologies. For example, students who did not already have Facebook or Twitter accounts were less likely to use either for learning purposes and perceived setting up an account to be an increase in workload (Bista, 2015; Clements, 2015; DiVall & Kirwin, 2012; Hennessy et al., 2016; Mysko & Delgaty, 2015; Tiernan, 2014). Therefore, prior to using any technology, instructors may want to determine how many students already have accounts and/or are familiar with the technology.
Carefully select technologies based on their strengths and limitations and the intended learning outcome. For example, Twitter is limited to 140 characters, making it a viable tool for learning activities that require brevity. In one study, an instructor used Twitter for short pop quizzes during lectures, where the first few students to tweet the correct answer received additional points (Kim et al., 2015 ), which helped students practice applying knowledge. In addition, studies show that students perceive Twitter and Facebook to be primarily for social interactions (Camus et al., 2016 ; Ross et al., 2015 ), which may make these technologies viable tools for sharing resources, giving brief opinions about news stories pertaining to course content, or having casual conversations with classmates rather than full-fledged scholarly discourse.
Incentivize students to use technology, either by assigning regular grades or giving extra credit. The average participation rate in voluntary web-conferencing, Facebook, and Twitter learning activities across the studies we reviewed was 52% (Andrew et al., 2015; Armstrong & Thornton, 2012; Bahati, 2015; Bowman & Akcaoglu, 2014; DiVall & Kirwin, 2012; Dougherty & Andercheck, 2014; Fagioli et al., 2015; Hennessy et al., 2016; Junco et al., 2013; Rambe, 2012; Ross et al., 2015; Staines & Lauchs, 2013; Tiernan, 2014; Williams & Whiting, 2016). While there were far fewer studies on the use of technology for graded or mandatory learning activities, the average participation rate reported in those studies was 97% (Bahati, 2015; Gagnon, 2015), suggesting that grading may be a key factor in ensuring students participate.
Communicate clear guidelines for technology use. Prior to the implementation of technology in a course, students may benefit from an overview of the technology, including its navigational features, privacy settings, and security (Andrew et al., 2015; Hurt et al., 2012; Martin et al., 2012), and a set of guidelines for how to use the technology effectively and professionally within an educational setting (Miller et al., 2012; Prestridge, 2014; Staines & Lauchs, 2013; West et al., 2015). In addition, giving students examples of exemplary and poor entries and posts may also help to clarify how they are expected to use the technology (Shraim, 2014; Roussinos & Jimoyiannis, 2013). Also, if instructors expect students to use technology to demonstrate higher-order thinking or to interact with peers, there should be explicit instructions to do so. For example, Prestridge (2014) found that students used Twitter to ask the instructor questions but very few interacted with peers because they were not explicitly asked to do so. Similarly, Hou et al. (2015) reported low levels of knowledge construction on Facebook, admitting that the wording of the learning activity (e.g., explore and present applications of computer networking) and the lack of probing questions in the instructions may have been to blame.
Use technology to provide authentic and integrated learning experiences. In many studies, instructors used digital games to simulate authentic environments in which students could apply new knowledge and skills, which ultimately led to a greater understanding of content and evidence of higher-order thinking (Beckem & Watkins, 2012; Liu et al., 2011; Lu et al., 2014; Marriott et al., 2015; Siddique et al., 2013). For example, in one study, students were required to play the role of a stock trader in a simulated trading environment, and they reported that the simulation helped them engage in critical reflection, enabling them to identify mistakes and weaknesses in their trading approaches and strategies (Marriott et al., 2015). In addition, integrating technology into regularly scheduled classroom activities, such as lectures, may help to promote student engagement. For example, in one study, the instructor posed a question in class, asked students to respond aloud or tweet their response, and projected the Twitter page so that everyone could see the tweets in class, which led to favorable comments about the usefulness of Twitter to promote engagement (Tiernan, 2014).
Actively participate in using the technologies assigned to students during the first few weeks of the course to generate interest (Dougherty & Andercheck, 2014; West et al., 2015) and, preferably, throughout the course to answer questions, encourage dialogue, correct misconceptions, and address inappropriate behavior (Bowman & Akcaoglu, 2014; Hennessy et al., 2016; Junco et al., 2013; Roussinos & Jimoyiannis, 2013). Miller et al. (2012) found that faculty encouragement and prompting were associated with increases in students' expression of ideas and the degree to which they edited and elaborated on their peers' work in a course-specific wiki.
Be mindful of privacy, security, and accessibility issues. In many studies, instructors took necessary steps to help ensure privacy and security by creating closed Facebook groups and private Twitter pages, accessible only to students in the course (Bahati, 2015 ; Bista, 2015 ; Bowman & Akcaoglu, 2014 ; Esteves, 2012 ; Rambe, 2012 ; Tiernan, 2014 ; Williams & Whiting, 2016 ) and by offering training to students on how to use privacy and security settings (Hurt et al., 2012 ). Instructors also made efforts to increase accessibility of web-conferencing software by including a phone number for students unable to access audio or video through their computer and by recording and archiving sessions for students unable to attend due to pre-existing conflicts (Andrew et al., 2015 ; Martin et al., 2012 ). In the future, instructors should also keep in mind that some technologies, like Facebook and Twitter , are not accessible to students living in China; therefore, alternative arrangements may need to be made.
In 1985, Steve Jobs predicted that computers and software would revolutionize the way we learn. Over 30 years later, his prediction has yet to be fully confirmed in the student engagement literature; however, our findings offer preliminary evidence that the potential is there. Of the technologies we reviewed, digital games, web-conferencing software, and Facebook had the most far-reaching effects across multiple types and indicators of student engagement, suggesting that technology should be considered a factor that influences student engagement in existing models. Findings regarding blogs, wikis, and Twitter, however, are less convincing, given a lack of studies in relation to engagement indicators or mixed findings. Significant methodological limitations may account for the wide range of findings in the literature. For example, small sample sizes, inconsistent measurement of variables, lack of comparison groups, and missing details about specific, pedagogical uses of technologies threaten the validity and reliability of findings. Therefore, more rigorous and robust research is needed to confirm and build upon limited but positive findings, clarify mixed findings, and address gaps particularly regarding how different technologies influence emotional and cognitive indicators of engagement.
Abbreviations
LMS: Learning management system
References
Amirault, R. J. (2012). Distance learning in the 21st century university. Quarterly Review of Distance Education, 13 (4), 253–265.
Anderson, M. (2016). More Americans using smartphones for getting directions, streaming TV . Washington, D.C.: Pew Research Center Retrieved from http://www.pewresearch.org/fact-tank/2016/01/29/us-smartphone-use/ .
Anderson, M., & Horrigan, J. B. (2016). Smartphones help those without broadband get online, but don’t necessarily bridge the digital divide . Washington, D.C.: Pew Research Center Retrieved from http://www.pewresearch.org/fact-tank/2016/10/03/smartphones-help-those-without-broadband-get-online-but-dont-necessarily-bridge-the-digital-divide/ .
Andrew, L., Maslin-Prothero, S., & Ewens, B. (2015). Enhancing the online learning experience using virtual interactive classrooms. Australian Journal of Advanced Nursing, 32 (4), 22–31.
Antunes, M., Pacheco, M. R., & Giovanela, M. (2012). Design and implementation of an educational game for teaching chemistry in higher education. Journal of Chemical Education, 89 (4), 517–521. doi: 10.1021/ed2003077 .
Armier, D. J., Shepherd, C. E., & Skrabut, S. (2016). Using game elements to increase student engagement in course assignments. College Teaching, 64 (2), 64–72 https://doi.org/10.1080/87567555.2015.1094439 .
Armstrong, A., & Thornton, N. (2012). Incorporating Brookfield’s discussion techniques synchronously into asynchronous online courses. Quarterly Review of Distance Education, 13 (1), 1–9.
Ashrafzadeh, A., & Sayadian, S. (2015). University instructors’ concerns and perceptions of technology integration. Computers in Human Behavior, 49 , 62–73. doi: 10.1016/j.chb.2015.01.071 .
Astin, A. W. (1984). Student involvement: A developmental theory for higher education. Journal of College Student Personnel, 25 (4), 297–308.
Auman, C. (2011). Using simulation games to increase student and instructor engagement. College Teaching, 59 (4), 154–161. doi: 10.1080/87567555 .
Axelson, R. D., & Flick, A. (2011). Defining student engagement. Change: The magazine of higher learning, 43 (1), 38–43.
Bahati, B. (2015). Extending student discussions beyond lecture room walls via Facebook. Journal of Education and Practice, 6 (15), 160–171.
Bakker, A. B., Vergel, A. I. S., & Kuntze, J. (2015). Student engagement and performance: A weekly diary study on the role of openness. Motivation and Emotion, 39 (1), 49–62. doi: 10.1007/s11031-014-9422-5 .
Beckem, J. I., & Watkins, M. (2012). Bringing life to learning: Immersive experiential learning simulations for online and blended courses. Journal of Asynchronous Learning Networks, 16 (5), 61–70 https://doi.org/10.24059/olj.v16i5.287 .
Bista, K. (2015). Is Twitter an effective pedagogical tool in higher education? Perspectives of education graduate students. Journal of the Scholarship Of Teaching And Learning, 15 (2), 83–102 https://doi.org/10.14434/josotl.v15i2.12825 .
Boghossian, P. (2006). Behaviorism, constructivism, and Socratic pedagogy. Educational Philosophy and Theory, 38 (6), 713–722 https://doi.org/10.1111/j.1469-5812.2006.00226.x .
Bower, M. (2011). Redesigning a web-conferencing environment to scaffold computing students’ creative design processes. Journal of Educational Technology & Society, 14 (1), 27–42.
Bower, M. (2016). A framework for adaptive learning design in a Web-conferencing environment. Journal of Interactive Media in Education, 2016 (1), 11 http://doi.org/10.5334/jime.406 .
Bowman, N. D., & Akcaoglu, M. (2014). “I see smart people!”: Using Facebook to supplement cognitive and affective learning in the university mass lecture. The Internet and Higher Education, 23 , 1–8. doi: 10.1016/j.iheduc.2014.05.003 .
Boyle, E. A., Hainey, T., Connolly, T. M., Gray, G., Earp, J., Ott, M., et al. (2016). An update to the systematic literature review of empirical evidence of the impacts and outcomes of computer games and serious games. Computers & Education, 94 , 178–192. doi: 10.1016/j.compedu.2015.11.003 .
Bryson, C., & Hand, L. (2007). The role of engagement in inspiring teaching and learning. Innovations in Education and Teaching International, 44 (4), 349–362. doi: 10.1080/14703290701602748 .
Buchanan, T., Sainter, P., & Saunders, G. (2013). Factors affecting faculty use of learning technologies: Implications for models of technology adoption. Journal of Computing in Higher Education, 25 (1), 1–11.
Bullen, M., & Morgan, T. (2011). Digital learners not digital natives. La Cuestión Universitaria, 7 , 60–68.
Bullen, M., Morgan, T., & Qayyum, A. (2011). Digital learners in higher education: Generation is not the issue. Canadian Journal of Learning and Technology, 37 (1), 1–24.
Calabretto, J., & Rao, D. (2011). Wikis to support collaboration of pharmacy students in medication management workshops -- a pilot project. International Journal of Pharmacy Education & Practice, 8 (2), 1–12.
Camacho, M. E., Carrión, M. D., Chayah, M., & Campos, J. M. (2016). The use of wiki to promote students’ learning in higher education (Degree in Pharmacy). International Journal of Educational Technology in Higher Education, 13 (1), 1–8 https://doi.org/10.1186/s41239-016-0025-y .
Camus, M., Hurt, N. E., Larson, L. R., & Prevost, L. (2016). Facebook as an online teaching tool: Effects on student participation, learning, and overall course performance. College Teaching, 64 (2), 84–94 https://doi.org/10.1080/87567555.2015.1099093 .
Carini, R. M., Kuh, G. D., & Klein, S. P. (2006). Student engagement and student learning: Testing the linkages. Research in Higher Education, 47 (1), 1–32. doi: 10.1007/s11162-005-8150-9 .
Cassidy, E. D., Colmenares, A., Jones, G., Manolovitz, T., Shen, L., & Vieira, S. (2014). Higher Education and Emerging Technologies: Shifting Trends in Student Usage. The Journal of Academic Librarianship, 40 , 124–133. doi: 10.1016/j.acalib.2014.02.003 .
Center for Postsecondary Research (2016). Engagement insights: Survey findings on the quality of undergraduate education . Retrieved from http://nsse.indiana.edu/NSSE_2016_Results/pdf/NSSE_2016_Annual_Results.pdf .
Center for Postsecondary Research (2017). About NSSE. Retrieved on February 15, 2017 from http://nsse.indiana.edu/html/about.cfm
Cercone, K. (2008). Characteristics of adult learners with implications for online learning design. AACE Journal, 16 (2), 137–159.
Chang, J. W., & Wei, H. Y. (2016). Exploring Engaging Gamification Mechanics in Massive Online Open Courses. Educational Technology & Society, 19 (2), 177–203.
Chawinga, W. D. (2017). Taking social media to a university classroom: teaching and learning using Twitter and blogs. International Journal of Educational Technology in Higher Education, 14 (1), 3 https://doi.org/10.1186/s41239-017-0041-6 .
Chen, B., Seilhamer, R., Bennett, L., & Bauer, S. (2015). Students’ mobile learning practices in higher education: A multi-year study. In EDUCAUSE Review Retrieved from http://er.educause.edu/articles/2015/6/students-mobile-learning-practices-in-higher-education-a-multiyear-study .
Chu, S. K., Chan, C. K., & Tiwari, A. F. (2012). Using blogs to support learning during internship. Computers & Education, 58 (3), 989–1000. doi: 10.1016/j.compedu.2011.08.027 .
Clements, J. C. (2015). Using Facebook to enhance independent student engagement: A case study of first-year undergraduates. Higher Education Studies, 5 (4), 131–146 https://doi.org/10.5539/hes.v5n4p131 .
Coates, H. (2008). Attracting, engaging and retaining: New conversations about learning . Camberwell: Australian Council for Educational Research Retrieved from http://research.acer.edu.au/cgi/viewcontent.cgi?article=1015&context=ausse .
Coffey, D. J., Miller, W. J., & Feuerstein, D. (2011). Classroom as reality: Demonstrating campaign effects through live simulation. Journal of Political Science Education, 7 (1), 14–33.
Coghlan, E., Crawford, J., Little, J., Lomas, C., Lombardi, M., Oblinger, D., & Windham, C. (2007). ELI Discovery Tool: Guide to Blogging . Retrieved from https://net.educause.edu/ir/library/pdf/ELI8006.pdf .
Connolly, T. M., Boyle, E. A., MacArthur, E., Hainey, T., & Boyle, J. M. (2012). A systematic literature review of empirical evidence on computer games and serious games. Computers & Education, 59 , 661–686. doi: 10.1016/j.compedu.2012.03.004 .
Cook, C. W., & Sonnenberg, C. (2014). Technology and online education: Models for change. ASBBS E-Journal, 10 (1), 43–59.
Crocco, F., Offenholley, K., & Hernandez, C. (2016). A proof-of-concept study of game-based learning in higher education. Simulation & Gaming, 47 (4), 403–422. doi: 10.1177/1046878116632484 .
Csikszentmihalyi, M. (1988). The flow experience and its significance for human psychology. In M. Csikszentmihalyi & I. Csikszentmihalyi (Eds.), Optimal experience: Psychological studies of flow in consciousness (pp. 15–13). Cambridge, UK: Cambridge University Press.
Dahlstrom, E. (2012). ECAR study of undergraduate students and information technology, 2012 (Research Report). Retrieved from http://net.educause.edu/ir/library/pdf/ERS1208/ERS1208.pdf
de Freitas, S. (2006). Learning in immersive worlds: A review of game-based learning . Retrieved from https://curve.coventry.ac.uk/open/file/aeedcd86-bc4c-40fe-bfdf-df22ee53a495/1/learning%20in%20immersive%20worlds.pdf .
Dichev, C., & Dicheva, D. (2017). Gamifying education: What is known, what is believed and what remains uncertain: A critical review. International Journal of Educational Technology in Higher Education, 14 (9), 1–36. doi: 10.1186/s41239-017-0042-5 .
DiVall, M. V., & Kirwin, J. L. (2012). Using Facebook to facilitate course-related discussion between students and faculty members. American Journal of Pharmaceutical Education, 76 (2), 1–5 https://doi.org/10.5688/ajpe76232 .
Dos, B., & Demir, S. (2013). The analysis of the blogs created in a blended course through the reflective thinking perspective. Educational Sciences: Theory & Practice, 13 (2), 1335–1344.
Dougherty, K., & Andercheck, B. (2014). Using Facebook to engage learners in a large introductory course. Teaching Sociology, 42 (2), 95–104 https://doi.org/10.1177/0092055x14521022 .
Dyson, B., Vickers, K., Turtle, J., Cowan, S., & Tassone, A. (2015). Evaluating the use of Facebook to increase student engagement and understanding in lecture-based classes. Higher Education: The International Journal of Higher Education and Educational Planning, 69 (2), 303–313 https://doi.org/10.1007/s10734-014-9776-3.
Esteves, K. K. (2012). Exploring Facebook to enhance learning and student engagement: A case from the University of the Philippines (UP) Open University. Malaysian Journal of Distance Education, 14 (1), 1–15.
Evans, C. (2014). Twitter for teaching: Can social media be used to enhance the process of learning? British Journal of Educational Technology, 45 (5), 902–915 https://doi.org/10.1111/bjet.12099 .
Fagioli, L., Rios-Aguilar, C., & Deil-Amen, R. (2015). Changing the context of student engagement: Using Facebook to increase community college student persistence and success. Teachers College Record, 17 , 1–42.
Farley, P. C. (2013). Using the computer game “FoldIt” to entice students to explore external representations of protein structure in a biochemistry course for nonmajors. Biochemistry and Molecular Biology Education, 41 (1), 56–57 https://doi.org/10.1002/bmb.20655 .
Francescucci, A., & Foster, M. (2013). The VIRI classroom: The impact of blended synchronous online courses on student performance, engagement, and satisfaction. Canadian Journal of Higher Education, 43 (3), 78–91.
Fredricks, J., Blumenfeld, P., & Paris, A. (2004). School engagement: Potential of the concept, state of the evidence. Review of Educational Research, 74 (1), 59–109. doi: 10.3102/00346543074001059 .
Gagnon, K. (2015). Using twitter in health professional education: A case study. Journal of Allied Health, 44 (1), 25–33.
Gandhi, P., Khanna, S., & Ramaswamy, S. (2016). Which industries are the most digital (and why?) . Retrieved from https://hbr.org/2016/04/a-chart-that-shows-which-industries-are-the-most-digital-and-why .
Garrison, D. R., & Arbaugh, J. B. (2007). Researching the community of inquiry framework: Review, issues, and future directions. The Internet and Higher Education, 10 (3), 157–172 http://dx.doi.org/10.1016/j.iheduc.2007.04.001 .
Garrity, M. K., Jones, K., VanderZwan, K. J., de la Rocha, A. R., & Epstein, I. (2014). Integrative review of blogging: Implications for nursing education. Journal of Nursing Education, 53 (7), 395–401. doi: 10.3928/01484834-20140620-01 .
Gikas, J., & Grant, M. M. (2013). Mobile computing devices in higher education: Student perspectives on learning with cellphones, smartphones & social media. The Internet and Higher Education, 19 , 18–26 http://dx.doi.org/10.1016/j.iheduc.2013.06.002 .
Gilboy, M. B., Heinerichs, S., & Pazzaglia, G. (2015). Enhancing student engagement using the flipped classroom. Journal of Nutrition Education and Behavior, 47 (1), 109–114 http://dx.doi.org/10.1016/j.jneb.2014.08.008 .
Greenwood, S., Perrin, A., & Duggan, M. (2016). Social media update 2016 . Washington, D.C.: Pew Research Center Retrieved from http://www.pewinternet.org/2016/11/11/social-media-update-2016/ .
Grimley, M., Green, R., Nilsen, T., & Thompson, D. (2012). Comparing computer game and traditional lecture using experience ratings from high and low achieving students. Australasian Journal of Educational Technology, 28 (4), 619–638 https://doi.org/10.14742/ajet.831 .
Gunawardena, C. N., Hermans, M. B., Sanchez, D., Richmond, C., Bohley, M., & Tuttle, R. (2009). A theoretical framework for building online communities of practice with social networking tools. Educational Media International, 46 (1), 3–16 https://doi.org/10.1080/09523980802588626 .
Haggis, T. (2009). What have we been thinking of? A critical overview of 40 years of student learning research in higher education. Studies in Higher Education, 34 (4), 377–390. doi: 10.1080/03075070902771903 .
Hauptman, P. H. (2015). Mobile technology in college instruction: Faculty perceptions and barriers to adoption (Doctoral dissertation). Retrieved from ProQuest. (AAI3712404).
Hennessy, C. M., Kirkpatrick, E., Smith, C. F., & Border, S. (2016). Social media and anatomy education: Using twitter to enhance the student learning experience in anatomy. Anatomical Sciences Education, 9 (6), 505–515 https://doi.org/10.1002/ase.1610 .
Hew, K. F., Huang, B., Chu, K. S., & Chiu, D. K. (2016). Engaging Asian students through game mechanics: Findings from two experiment studies. Computers & Education, 93 , 221–236. doi: 10.1016/j.compedu.2015.10.010 .
Hewege, C. R., & Perera, L. R. (2013). Pedagogical significance of wikis: Towards gaining effective learning outcomes. Journal of International Education in Business, 6 (1), 51–70 https://doi.org/10.1108/18363261311314953 .
Hou, H., Wang, S., Lin, P., & Chang, K. (2015). Exploring the learner’s knowledge construction and cognitive patterns of different asynchronous platforms: comparison of an online discussion forum and Facebook. Innovations in Education and Teaching International, 52 (6), 610–620. doi: 10.1080/14703297.2013.847381 .
Hu, S., & McCormick, A. C. (2012). An engagement-based student typology and its relationship to college outcomes. Research in Higher Education, 53 , 738–754. doi: 10.1007/s11162-012-9254-7 .
Hudson, T. M., Knight, V., & Collins, B. C. (2012). Perceived effectiveness of web conferencing software in the digital environment to deliver a graduate course in applied behavior analysis. Rural Special Education Quarterly, 31 (2), 27–39.
Hurt, N. E., Moss, G. S., Bradley, C. L., Larson, L. R., Lovelace, M. D., & Prevost, L. B. (2012). The ‘Facebook’ effect: College students’ perceptions of online discussions in the age of social networking. International Journal for the Scholarship of Teaching & Learning, 6 (2), 1–24 https://doi.org/10.20429/ijsotl.2012.060210 .
Ibáñez, M. B., Di-Serio, A., & Delgado-Kloos, C. (2014). Gamification for engaging computer science students in learning activities: A case study. IEEE Transactions on Learning Technologies, 7 (3), 291–301 https://doi.org/10.1109/tlt.2014.2329293 .
Ivala, E., & Gachago, D. (2012). Social media for enhancing student engagement: The use of facebook and blogs at a university of technology. South African Journal of Higher Education, 26 (1), 152–167.
Johnson, D. R. (2013). Technological change and professional control in the professoriate. Science, Technology & Human Values, 38 (1), 126–149. doi: 10.1177/0162243911430236 .
Junco, R., Elavsky, C. M., & Heiberger, G. (2013). Putting Twitter to the test: Assessing outcomes for student collaboration, engagement and success. British Journal of Educational Technology, 44 (2), 273–287. doi: 10.1111/j.1467-8535.2012.01284.x .
Junco, R., Heiberger, G., & Loken, E. (2011). The effect of Twitter on college student engagement and grades. Journal of Computer Assisted Learning, 27 (2), 119–132. doi: 10.1111/j.1365-2729.2010.00387.x .
Kahu, E. R. (2013). Framing student engagement in higher education. Studies in Higher Education, 38 (5), 758–773. doi: 10.1080/03075079.2011.598505 .
Kaware, S. S., & Sain, S. K. (2015). ICT Application in Education: An Overview. International Journal of Multidisciplinary Approach & Studies, 2 (1), 25–32.
Ke, F., Xie, K., & Xie, Y. (2016). Game-based learning engagement: A theory- and data-driven exploration. British Journal of Educational Technology, 47 (6), 1183–1201 https://doi.org/10.1111/bjet.12314 .
Kent, M. (2013). Changing the conversation: Facebook as a venue for online class discussion in higher education. Journal of Online Learning & Teaching, 9 (4), 546–565 https://doi.org/10.1353/rhe.2015.0000 .
Kidd, T., Davis, T., & Larke, P. (2016). Experience, adoption, and technology: Exploring the phenomenological experiences of faculty involved in online teaching at one school of public health. International Journal of E-Learning, 15 (1), 71–99.
Kim, Y., Jeong, S., Ji, Y., Lee, S., Kwon, K. H., & Jeon, J. W. (2015). Smartphone response system using twitter to enable effective interaction and improve engagement in large classrooms. IEEE Transactions on Education, 58 (2), 98–103 https://doi.org/10.1109/te.2014.2329651 .
Kinchin, I. M. (2012). Avoiding technology-enhanced non-learning. British Journal of Educational Technology, 43 (2), E43–E48.
Kolb, D. A. (2014). Experiential learning: Experience as the source of learning and development (2nd ed.). Upper Saddle River: Pearson Education, Inc.
Kopcha, T. J., Rieber, L. P., & Walker, B. B. (2016). Understanding university faculty perceptions about innovation in teaching and technology. British Journal of Educational Technology, 47 (5), 945–957. doi: 10.1111/bjet.12361 .
Krause, K., & Coates, H. (2008). Students’ engagement in first-year university. Assessment and Evaluation in Higher Education, 33 (5), 493–505. doi: 10.1080/02602930701698892 .
Kuh, G. D. (2009). The National Survey of Student Engagement: Conceptual and empirical foundations. New Directions for Institutional Research, 141 , 5–20.
Lam, S., Wong, B., Yang, H., & Yi, L. (2012). Understanding student engagement with a contextual model. In S. L. Christenson, A. L. Reschly, & C. Wylie (Eds.), Handbook of Research on Student Engagement (pp. 403–419). New York: Springer.
Lawrence, B., & Lentle-Keenan, S. (2013). Teaching beliefs and practice, institutional context, and the uptake of Web-based technology. Distance Education, 34 (1), 4–20.
Leach, L. (2016). Enhancing student engagement in one institution. Journal of Further and Higher Education, 40 (1), 23–47.
Lester, D. (2013). A review of the student engagement literature. Focus on Colleges, Universities, and Schools, 7 (1), 1–8.
Lewis, C. C., Fretwell, C. E., Ryan, J., & Parham, J. B. (2013). Faculty use of established and emerging technologies in higher education: A unified theory of acceptance and use of technology perspective. International Journal of Higher Education, 2 (2), 22–34 http://dx.doi.org/10.5430/ijhe.v2n2p22 .
Lin, C., Singer, R., & Ha, L. (2010). Why university members use and resist technology? A structure enactment perspective. Journal of Computing in Higher Education, 22 (1), 38–59. doi: 10.1007/s12528-010-9028-1 .
Linder-VanBerschot, J. A., & Summers, L. L. (2015). Designing instruction in the face of technology transience. Quarterly Review of Distance Education, 16 (2), 107–118.
Liu, C., Cheng, Y., & Huang, C. (2011). The effect of simulation games on the learning of computational problem solving. Computers & Education, 57 (3), 1907–1918 https://doi.org/10.1016/j.compedu.2011.04.002 .
Lu, J., Hallinger, P., & Showanasai, P. (2014). Simulation-based learning in management education: A longitudinal quasi-experimental evaluation of instructional effectiveness. Journal of Management Development, 33 (3), 218–244. doi: 10.1108/JMD-11-2011-0115 .
Maben, S., Edwards, J., & Malone, D. (2014). Online engagement through Facebook groups in face-to-face undergraduate communication courses: A case study. Southwestern Mass Communication Journal, 29 (2), 1–27.
Manca, S., & Ranieri, M. (2013). Is it a tool suitable for learning? A critical review of the literature on Facebook as a technology-enhanced learning environment. Journal of Computer Assisted Learning, 29 (6), 487–504. doi: 10.1111/jcal.12007 .
Mansouri, S. A., & Piki, A. (2016). An exploration into the impact of blogs on students’ learning: Case studies in postgraduate business education. Innovations in Education And Teaching International, 53 (3), 260–273 http://dx.doi.org/10.1080/14703297.2014.997777 .
Marriott, P., Tan, S. W., & Marriott, N. (2015). Experiential learning – A case study of the use of computerized stock market trading simulation in finance education. Accounting Education, 24 (6), 480–497 http://dx.doi.org/10.1080/09639284.2015.1072728 .
Martin, F., Parker, M. A., & Deale, D. F. (2012). Examining interactivity in synchronous virtual classrooms. International Review of Research in Open and Distance Learning, 13 (3), 227–261.
Martin, K., Goldwasser, M., & Galentino, R. (2017). Impact of Cohort Bonds on Student Satisfaction and Engagement. Current Issues in Education, 19 (3), 1–14.
Martínez, A. A., Medina, F. X., Albalat, J. A. P., & Rubió, F. S. (2013). Challenges and opportunities of 2.0 tools for the interdisciplinary study of nutrition: The case of the Mediterranean Diet wiki. International Journal of Educational Technology in Higher Education, 10 (1), 210–225 https://doi.org/10.7238/rusc.v10i1.1341 .
McBrien, J. L., Jones, P., & Cheng, R. (2009). Virtual spaces: Employing a synchronous online classroom to facilitate student engagement in online learning. International Review of Research in Open and Distance Learning, 10 (3), 1–17 https://doi.org/10.19173/irrodl.v10i3.605 .
McClenney, K., Marti, C. N., & Adkins, C. (2012). Student engagement and student outcomes: Key findings from “CCSSE” validation research . Austin: Community College Survey of Student Engagement.
McKay, M., Sanko, J., Shekhter, I., & Birnbach, D. (2014). Twitter as a tool to enhance student engagement during an interprofessional patient safety course. Journal of Interprofessional Care, 28 (6), 565–567 https://doi.org/10.3109/13561820.2014.912618 .
Miller, A. D., Norris, L. B., & Bookstaver, P. B. (2012). Use of wikis in pharmacy hybrid elective courses. Currents in Pharmacy Teaching & Learning, 4 (4), 256–261. doi: 10.1016/j.cptl.2012.05.004 .
Morley, D. A. (2012). Enhancing networking and proactive learning skills in the first year university experience through the use of wikis. Nurse Education Today, 32 (3), 261–266.
Mysko, C., & Delgaty, L. (2015). How and why are students using Twitter for #meded? Integrating Twitter into undergraduate medical education to promote active learning. Annual Review of Education, Communication & Language Sciences, 12 , 24–52.
Nadolny, L., & Halabi, A. (2016). Student participation and achievement in a large lecture course with game-based learning. Simulation and Gaming, 47 (1), 51–72. doi: 10.1177/1046878115620388 .
Naghdipour, B., & Eldridge, N. H. (2016). Incorporating social networking sites into traditional pedagogy: A case of facebook. TechTrends, 60 (6), 591–597 http://dx.doi.org/10.1007/s11528-016-0118-4 .
Nakamaru, S. (2012). Investment and return: Wiki engagement in a “remedial” ESL writing course. Journal of Research on Technology in Education, 44 (4), 273–291.
Nelson, R. (2016). Apple’s app store will hit 5 million apps by 2020, more than doubling its current size . Retrieved from https://sensortower.com/blog/app-store-growth-forecast-2020 .
Nora, A., Barlow, E., & Crisp, G. (2005). Student persistence and degree attainment beyond the first year in college. In A. Seidman (Ed.), College Student Retention (pp. 129–154). Westport: Praeger Publishers.
Osgerby, J., & Rush, D. (2015). An exploratory case study examining undergraduate accounting students’ perceptions of using Twitter as a learning support tool. International Journal of Management Education, 13 (3), 337–348. doi: 10.1016/j.ijme.2015.10.002 .
Pace, C. R. (1980). Measuring the quality of student effort. Current Issues in Higher Education, 2 , 10–16.
Pace, C. R. (1984). Student effort: A new key to assessing quality . Los Angeles: University of California, Higher Education Research Institute.
Paul, J. A., & Cochran, J. D. (2013). Key interactions for online programs between faculty, students, technologies, and educational institutions: A holistic framework. Quarterly Review of Distance Education, 14 (1), 49–62.
Acknowledgements
Not applicable.
Funding

This research was supported in part by a Laureate Education, Inc. David A. Wilson research grant awarded to the second author for the study "A Comparative Analysis of Student Engagement and Critical Thinking in Two Approaches to the Online Classroom".
Availability of data and materials
Authors' contributions

The first and second authors contributed significantly to the writing, review, and conceptual thinking of the manuscript. The third author provided a first detailed outline of what the paper could address, and the fourth author provided input and feedback through critical review. All authors read and approved the final manuscript.
Competing interests
The authors declare that they have no competing interests.
Consent for publication
Ethics approval and consent to participate
The parent study was approved by the University of Liverpool Online International Online Ethics Review Committee, approval number 04-24-2015-01.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Author information
Authors and affiliations
University of Liverpool Online, Liverpool, UK
Laura A. Schindler & Osama A. Morad
Laureate Education, Inc., Baltimore, USA
Gary J. Burkholder
Walden University, Minneapolis, USA
University of Lincoln, Lincoln, UK
Craig Marsh
Corresponding author
Correspondence to Laura A. Schindler .
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Schindler, L.A., Burkholder, G.J., Morad, O.A. et al. Computer-based technology and student engagement: a critical review of the literature. Int J Educ Technol High Educ 14 , 25 (2017). https://doi.org/10.1186/s41239-017-0063-0
Received : 31 March 2017
Accepted : 06 June 2017
Published : 02 October 2017
DOI : https://doi.org/10.1186/s41239-017-0063-0
Computer Technology Research Paper Topics

This list of computer technology research paper topics presents 33 potential topics for research papers, followed by an overview article on the history of computer technology.
1. Analog Computers
Paralleling the split between analog and digital computers, in the 1950s the term analog computer was projected a posteriori onto pre-existing classes of mechanical, electrical, and electromechanical computing artifacts, subsuming them under the same category. The concept of analog, like the technical demarcation between analog and digital computer, was absent from the vocabulary of those classifying artifacts for the 1914 Edinburgh Exhibition, the first world’s fair emphasizing computing technology, which leaves us with an invaluable index of the impressive number of classes of computing artifacts amassed during the few centuries of capitalist modernity. True, from the debate between ‘‘smooth’’ and ‘‘lumpy’’ artificial lines of computing (1910s) to the differentiation between ‘‘continuous’’ and ‘‘cyclic’’ computers (1940s), the subsequent analog–digital split was made possible by the accumulation of attempts to decontextualize the computer from its socio-historical use in order to define the ideal computer in purely technical terms. The fact is, however, that influential classifications of computing technology from the preceding decades never provided a demarcation as encompassing as the analog–digital distinction used since the 1950s. Historians of the digital computer find that the experience of working with software was much closer to art than science, a process that was resistant to mass production; historians of the analog computer find this to have been typical of working with the analog computer in all its aspects. The historiography of the progress of digital computing invites us to turn to the software crisis, which, perhaps not accidentally, surfaced when the crisis caused by the analog ended. Noticeably, it was not until computing with a digital electronic computer became sufficiently visual, through the addition of a special interface that substituted for the visualization previously provided by the analog computer, that the analog computer finally disappeared.
2. Artificial Intelligence
Artificial intelligence (AI) is the field of software engineering that builds computer systems and occasionally robots to perform tasks that require intelligence. The term ‘‘artificial intelligence’’ was coined by John McCarthy for a two-month summer workshop held at Dartmouth in 1956. This workshop, which brought together the young researchers who would nurture the field as it grew over the next several decades, including Marvin Minsky, Claude Shannon, Arthur Samuel, Ray Solomonoff, Oliver Selfridge, Allen Newell, and Herbert Simon, marks the official birth of AI. It would be difficult to argue that the technologies derived from AI research had a profound effect on our way of life by the beginning of the 21st century. However, AI technologies have been successfully applied in many industrial settings, medicine and health care, and video games. Programming techniques developed in AI research were incorporated into more widespread programming practices, such as high-level programming languages and time-sharing operating systems. While AI did not succeed in constructing a computer that displays the general mental capabilities of a typical human, such as the HAL computer in Arthur C. Clarke and Stanley Kubrick’s film 2001: A Space Odyssey, it has produced programs that perform some apparently intelligent tasks, often at a much greater level of skill and reliability than humans. More than this, AI has provided a powerful and defining image of what computer technology might someday be capable of achieving.
3. Computer and Video Games
Interactive computer and video games were first developed in laboratories as the late-night amusements of computer programmers or independent projects of television engineers. Their formats include computer software; networked, multiplayer games on time-shared systems or servers; arcade consoles; home consoles connected to television sets; and handheld game machines. The first experimental projects grew out of early work in computer graphics, artificial intelligence, television technology, hardware and software interface development, computer-aided education, and microelectronics. Important examples were Willy Higinbotham’s oscilloscope-based ‘‘Tennis for Two’’ at the Brookhaven National Laboratory (1958); ‘‘Spacewar!,’’ by Steve Russell, Alan Kotok, J. Martin Graetz and others at the Massachusetts Institute of Technology (1962); Ralph Baer’s television-based tennis game for Sanders Associates (1966); several networked games from the PLATO (Programmed Logic for Automatic Teaching Operations) Project at the University of Illinois during the early 1970s; and ‘‘Adventure,’’ by Will Crowther of Bolt, Beranek & Newman (1972), extended by Don Woods at Stanford University’s Artificial Intelligence Laboratory (1976). The main lines of development during the 1970s and early 1980s were home video consoles, coin-operated arcade games, and computer software.
4. Computer Displays
The display is an essential part of any general-purpose computer. Its function is to act as an output device to communicate data to humans using the highest bandwidth input system that humans possess—the eyes. Much of the development of computer displays has been about trying to get closer to the limits of human visual perception in terms of color and spatial resolution. Mainframe and minicomputers used ‘‘terminals’’ to display the output. These were fed data from the host computer and processed the data to create screen images using a graphics processor. The display was typically integrated with a keyboard system and some communication hardware as a terminal or video display unit (VDU) following the basic model used for teletypes. Personal computers (PCs) in the late 1970s and early 1980s changed this model by integrating the graphics controller into the computer chassis itself. Early PC displays typically displayed only monochrome text and communicated in character codes such as ASCII. Line-scanning frequencies were typically from 15 to 20 kilohertz—similar to television. CRT displays rapidly developed after the introduction of video graphics array (VGA) technology (640 by 480 pixels in 16 colors) in the mid-1980s, and scan frequencies rose to 60 kilohertz or more for mainstream displays; 100 kilohertz or more for high-end displays. These displays were capable of displaying formats up to 2048 by 1536 pixels with high color depths. Because the human eye is very quick to respond to visual stimulation, developments in display technology have tended to track the development of semiconductor technology that allows the rapid manipulation of the stored image.
5. Computer Memory for Personal Computers
During the second half of the twentieth century, the two primary methods used for the long-term storage of digital information were magnetic and optical recording. These methods were selected primarily on the basis of cost. Compared to core or transistorized random-access memory (RAM), storage costs for magnetic and optical media were several orders of magnitude cheaper per bit of information and were not volatile; that is, the information did not vanish when electrical power was turned off. However, access to information stored on magnetic and optical recorders was much slower compared to RAM memory. As a result, computer designers used a mix of both types of memory to accomplish computational tasks. Designers of magnetic and optical storage systems have sought meanwhile to increase the speed of access to stored information to increase the overall performance of computer systems, since most digital information is stored magnetically or optically for reasons of cost.
6. Computer Modeling
Computer simulation models have transformed the natural, engineering, and social sciences, becoming crucial tools for disciplines as diverse as ecology, epidemiology, economics, urban planning, aerospace engineering, meteorology, and military operations. Computer models help researchers study systems of extreme complexity, predict the behavior of natural phenomena, and examine the effects of human interventions in natural processes. Engineers use models to design everything from jets and nuclear-waste repositories to diapers and golf clubs. Models enable astrophysicists to simulate supernovas, biochemists to replicate protein folding, geologists to predict volcanic eruptions, and physiologists to identify populations at risk of lead poisoning. Clearly, computer models provide a powerful means of solving problems, both theoretical and applied.
7. Computer Networks
Computers and computer networks have changed the way we do almost everything—the way we teach, learn, do research, access or share information, communicate with each other, and even the way we entertain ourselves. A computer network, in simple terms, consists of two or more computing devices (often called nodes) interconnected by means of some medium capable of transmitting data that allows the computers to communicate with each other in order to provide a variety of services to users.
8. Computer Science
Computer science occupies a unique position among the scientific and technical disciplines. It revolves around a specific artifact—the electronic digital computer—that touches upon a broad and diverse set of fields in its design, operation, and application. As a result, computer science represents a synthesis and extension of many different areas of mathematics, science, engineering, and business.
9. Computer-Aided Control Technology
The story of computer-aided control technology is inextricably entwined with the modern history of automation. Automation in the first half of the twentieth century involved (often analog) processes for continuous automatic measurement and control of hardware by hydraulic, mechanical, or electromechanical means. These processes facilitated the development and refinement of battlefield fire-control systems, feedback amplifiers for use in telephony, electrical grid simulators, numerically controlled milling machines, and dozens of other innovations.
10. Computer-Aided Design and Manufacture
Computer-aided design and manufacture, known by the acronym CAD/CAM, is a process for manufacturing mechanical components, wherein computers are used to link the information needed in and produced by the design process to the information needed to control the machine tools that produce the parts. However, CAD/CAM actually constitutes two separate technologies that developed along similar, but unrelated, lines until they were combined in the 1970s.
11. Computer-User Interface
A computer interface is the point of contact between a person and an electronic computer. Today’s interfaces include a keyboard, mouse, and display screen. Computer user interfaces developed through three distinct stages, which can be identified as batch processing, interactive computing, and the graphical user interface (GUI). Today’s graphical interfaces support additional multimedia features, such as streaming audio and video. In GUI design, every new software feature introduces more icons into the process of computer–user interaction. Presently, the large vocabulary of icons used in GUI design is difficult for users to remember, which creates a complexity problem. As GUIs become more complex, interface designers are adding voice recognition and intelligent agent technologies to make computer user interfaces even easier to operate.
12. Early Computer Memory
Mechanisms to store information were present in early mechanical calculating machines, going back to Charles Babbage’s analytical engine proposed in the 1830s. It introduced the concept of the ‘‘store’’ and, had it ever been built, would have held 1000 numbers of up to 50 decimal digits. However, the move toward base-2 or binary computing in the 1930s brought about a new paradigm in technology—the digital computer, whose most elementary component was an on–off switch. Information on a digital system is represented using a combination of on and off signals, stored as binary digits (shortened to bits): zeros and ones. Text characters, symbols, or numerical values can all be coded as bits, so that information stored in digital memory is just zeros and ones, regardless of the storage medium. The history of computer memory is closely linked to the history of computers, but a distinction should be made between primary (or main) and secondary memory. Computers need only operate on one segment of data at a time, so with primary memory a scarce resource, the rest of the data set could be stored in less expensive and more abundant secondary memory.
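As a minimal illustration of the point above (not drawn from the original entry; the function names and the choice of ASCII and unsigned binary encodings are assumptions made for the example), the following Python sketch shows how a text character and a small integer reduce to the same kind of bit patterns once stored digitally:

```python
# Minimal sketch: any symbol or number stored digitally is just a pattern of bits.
# The encodings shown (ASCII for text, unsigned binary for integers) are illustrative
# conventions, not the only ones machines have used.

def char_to_bits(ch: str) -> str:
    """Return the 8-bit ASCII pattern for a single character."""
    return format(ord(ch), "08b")

def int_to_bits(n: int, width: int = 8) -> str:
    """Return an unsigned binary pattern of the given width for a small integer."""
    return format(n, f"0{width}b")

if __name__ == "__main__":
    print(char_to_bits("A"))   # 01000001
    print(int_to_bits(65))     # 01000001 -- the same bits, interpreted differently
    print(int_to_bits(50))     # 00110010
```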
13. Early Digital Computers
Digital computers were a marked departure from the electrical and mechanical calculating and computing machines in wide use from the early twentieth century. The innovation was that information was represented using only two states (on or off), an approach that came to be known as ‘‘digital.’’ Binary (base 2) arithmetic and logic provided the tools for these machines to perform useful functions. George Boole’s binary system of algebra allowed mathematical and logical statements to be represented using simply true or false values. By using only two states, engineering was also greatly simplified, and universality and accuracy increased. Further development, from the early purpose-built machines to programmable ones, together with many key technological advances, resulted in the well-known success and proliferation of the digital computer.
14. Electronic Control Technology
The advancement of electrical engineering in the twentieth century made a fundamental change in control technology. New electronic devices including vacuum tubes (valves) and transistors were used to replace electromechanical elements in conventional controllers and to develop new types of controllers. In these practices, engineers discovered basic principles of control theory that could be further applied to design electronic control systems.
15. Encryption and Code Breaking
The word cryptography comes from the Greek words for ‘‘hidden’’ (kryptos) and ‘‘to write’’ (graphein)—literally, the science of ‘‘hidden writing.’’ In the twentieth century, cryptography became fundamental to information technology (IT) security generally. Before the invention of the digital computer at mid-century, national governments across the world relied on mechanical and electromechanical cryptanalytic devices to protect their own national secrets and communications, as well as to expose enemy secrets. Code breaking played an important role in both World Wars I and II, and the successful exploits of Polish and British cryptographers and signals intelligence experts in breaking the code of the German Enigma ciphering machine (which had approximately 150 million million million possible transformations between a message and its code) are well documented.
16. Error Checking and Correction
In telecommunications, whether transmission of data or voice signals is over copper, fiber-optic, or wireless links, information coded in the signal transmitted must be decoded by the receiver from a background of noise. Signal errors can be introduced, for example from physical defects in the transmission medium (semiconductor crystal defects, dust or scratches on magnetic memory, bubbles in optical fibers), from electromagnetic interference (natural or manmade) or cosmic rays, or from cross-talk (unwanted coupling) between channels. In digital signal transmission, data is transmitted as ‘‘bits’’ (ones or zeros, corresponding to on or off in electronic circuits). Random bit errors occur singly and in no relation to each other. Burst error is a large, sustained error or loss of data, perhaps caused by transmission problems in the connecting cables, or sudden noise. Analog to digital conversion can also introduce sampling errors.
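To make the idea of detecting random bit errors concrete, here is a small Python sketch of a single even-parity bit, one of the simplest error-detection schemes. It is an illustration only; the function names are invented for the example and are not taken from the text.

```python
# Minimal sketch of even-parity error detection: the sender appends one bit so the
# total number of 1s is even; the receiver flags any word whose count of 1s is odd.
# A single parity bit detects any odd number of flipped bits but cannot correct them,
# and it misses burst errors that flip an even number of bits.

def add_even_parity(bits: list[int]) -> list[int]:
    parity = sum(bits) % 2          # 1 if the count of ones is currently odd
    return bits + [parity]          # appended bit makes the total count even

def check_even_parity(word: list[int]) -> bool:
    return sum(word) % 2 == 0       # True means "no error detected"

if __name__ == "__main__":
    sent = add_even_parity([1, 0, 1, 1, 0, 0, 1])
    received = sent.copy()
    received[2] ^= 1                     # simulate a single random bit error in transit
    print(check_even_parity(sent))       # True  -- clean word passes
    print(check_even_parity(received))   # False -- single-bit error detected
```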
17. Global Positioning System (GPS)
The NAVSTAR (NAVigation System Timing And Ranging) Global Positioning System (GPS) provides an unlimited number of military and civilian users worldwide with continuous, highly accurate data on their position in four dimensions—latitude, longitude, altitude, and time—through all weather conditions. It includes space, control, and user segments. A constellation of 24 satellites in nearly circular orbits at about 10,900 nautical miles—six orbital planes, equally spaced 60 degrees apart, inclined approximately 55 degrees relative to the equator, and each with four equidistant satellites—transmits microwave signals in two different L-band frequencies. From any point on earth, between five and eight satellites are ‘‘visible’’ to the user. Synchronized, extremely precise atomic clocks—rubidium and cesium—aboard the satellites render the constellation semiautonomous by alleviating the need to continuously control the satellites from the ground. The control segment consists of a master facility at Schriever Air Force Base, Colorado, and a global network of automated stations. It passively tracks the entire constellation and, via an S-band uplink, periodically sends updated orbital and clock data to each satellite to ensure that navigation signals received by users remain accurate. Finally, GPS users—on land, at sea, in the air or space—rely on commercially produced receivers to convert satellite signals into position, time, and velocity estimates.
18. Gyrocompass and Inertial Guidance
Before the twentieth century, navigation at sea employed two complementary methods, astronomical and dead reckoning. The former involved direct measurements of celestial phenomena to ascertain position, while the latter required continuous monitoring of a ship’s course, speed, and distance run. New navigational technology was required not only for iron ships in which traditional compasses required correction, but for aircraft and submarines in which magnetic compasses cannot be used. Owing to their rapid motion, aircraft presented challenges for near instantaneous navigation data collection and reduction. Electronics furnished the exploitation of radio and the adaptation of a gyroscope to direction finding through the invention of the nonmagnetic gyrocompass.
Although the Cold War arms race after World War II led to the development of inertial navigation, German manufacture of the V-2 rocket under the direction of Wernher von Braun during the war involved a proto-inertial system, a two-gimballed gyro with an integrator to determine speed. Inertial guidance combines a gyrocompass with accelerometers installed along orthogonal axes, devices that record all accelerations of the vehicle in which inertial guidance has been installed. With this system, if the initial position of the vehicle is known, then the vehicle’s position at any moment is known because integrators record all directions and accelerations and calculate speeds and distance run. Inertial guidance devices can subtract accelerations due to gravity or other motions of the vehicle. Because inertial guidance does not depend on an outside reference, it is the ultimate dead reckoning system, ideal for the nuclear submarines for which they were invented and for ballistic missiles. Their self-contained nature makes them resistant to electronic countermeasures. Inertial systems were first installed in commercial aircraft during the 1960s. The expense of manufacturing inertial guidance mechanisms (and their necessary management by computer) has limited their application largely to military and some commercial purposes. Inertial systems accumulate errors, so their use at sea (except for submarines) has been as an adjunct to other navigational methods, unlike aircraft applications. Only the development of the global positioning system (GPS) at the end of the century promised to render all previous navigational technologies obsolete. Nevertheless, a range of technologies, some dating to the beginning of the century, remain in use in a variety of commercial and leisure applications.
19. Hybrid Computers
Following the emergence of the analog–digital demarcation in the late 1940s—and the ensuing battle between the speedy analog and the accurate digital—the term ‘‘hybrid computer’’ surfaced in the early 1960s. The assumptions held by the adherents of the digital computer—regarding the dynamic mechanization of computational labor to accompany the equally dynamic increase in computational work—were becoming a universal ideology. From this perspective, the digital computer justly appeared to be technically superior. In introducing the digital computer to social realities, however, extensive interaction with the experienced analog computer adherents proved indispensable, especially given that the digital proponents’ expectation of progress by employing the available and inexpensive hardware was stymied by the lack of inexpensive software. From this perspective—however historiographically unwelcome it may be to those who hold an essentialist conception of the analog–digital demarcation—the history of the hybrid computer suggests that the computer as we now know it was brought about by linking the analog and the digital, not by separating them. Placing the ideal analog and the ideal digital at the two poles, all computing techniques that combined some features of both fell under ‘‘hybrid computation’’; the designators ‘‘balanced’’ or ‘‘true’’ were reserved for those built with appreciable amounts of both. True hybrids fell in the middle of a spectrum that included: pure analog computers, analog computers using digital-type numerical analysis techniques, analog computers programmed with the aid of digital computers, analog computers using digital control and logic, analog computers using digital subunits, analog computers using digital computers as peripheral equipment, balanced hybrid computer systems, digital computers using analog subroutines, digital computers with analog arithmetic elements, digital computers designed to permit analog-type programming, digital computers with analog-oriented compilers and interpreters, and pure digital computers.
20. Information Theory
Information theory, also known originally as the mathematical theory of communication, was first explicitly formulated during the mid-twentieth century. Almost immediately it became a foundation: first, for the more systematic design and utilization of numerous telecommunication and information technologies; and second, for resolving a paradox in thermodynamics. Finally, information theory has contributed to new interpretations of a wide range of biological and cultural phenomena, from organic physiology and genetics to cognitive behavior, human language, economics, and political decision making. Reflecting the symbiosis between theory and practice typical of twentieth century technology, technical issues in early telegraphy and telephony gave rise to a proto-information theory developed by Harry Nyquist at Bell Labs in 1924 and Ralph Hartley, also at Bell Labs, in 1928. This theory in turn contributed to advances in telecommunications, which stimulated the development of information theory per se by Claude Shannon and Warren Weaver, in their book The Mathematical Theory of Communication published in 1949. As articulated by Claude Shannon, a Bell Labs researcher, the technical concept of information is defined by the probability of a specific message or signal being picked out from a number of possibilities and transmitted from A to B. Information in this sense is mathematically quantifiable. The amount of information, I, conveyed by a signal, S, is inversely related to its probability, P. That is, the more improbable a message, the more information it contains. To facilitate the mathematical analysis of messages, the measure is conveniently defined as I = log2(1/P(S)), and its unit is named the binary digit, or ‘‘bit’’ for short. Thus in the simplest case of a two-state signal (1 or 0, corresponding to on or off in electronic circuits), with equal probability for each state, the transmission of either state as the code for a message would convey one bit of information. The theory of information opened up by this conceptual analysis has become the basis for constructing and analyzing digital computational devices and a whole range of information technologies (i.e., technologies including telecommunications and data processing), from telephones to computer networks.
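A short Python sketch (an illustration of the formula above, not part of the original text; the function name is invented) shows how the bit measure behaves: an equiprobable two-state signal carries one bit, while rarer messages carry more.

```python
import math

def information_bits(p: float) -> float:
    """Shannon information I = log2(1/p) carried by a message of probability p."""
    return math.log2(1.0 / p)

if __name__ == "__main__":
    print(information_bits(0.5))    # 1.0 bit  -- one of two equally likely states
    print(information_bits(0.25))   # 2.0 bits -- one of four equally likely states
    print(information_bits(0.01))   # ~6.64 bits -- an improbable message carries more
```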
21. Internet
The Internet is a global computer network of networks whose origins are found in U.S. military efforts. In response to Sputnik and the emerging space race, the Advanced Research Projects Agency (ARPA) was formed in 1958 as an agency of the Pentagon. The researchers at ARPA were given a generous mandate to develop innovative technologies such as communications.
In 1962, psychologist J.C.R. Licklider from the Massachusetts Institute of Technology’s Lincoln Laboratory joined ARPA to take charge of the Information Processing Techniques Office (IPTO). In 1963 Licklider wrote a memo proposing an interactive network allowing people to communicate via computer, but this project did not materialize. In 1966, Bob Taylor, then head of the IPTO, noted that he needed three different computer terminals to connect to three different machines in different locations around the nation. Taylor also recognized that universities working with IPTO needed more computing resources. Instead of the government buying machines for each university, why not share machines? Taylor revitalized Licklider’s idea, securing $1 million in funding, and hired 29-year-old Larry Roberts to direct the creation of ARPAnet.
In 1974, Robert Kahn and Vinton Cerf proposed the first internetworking protocol, a way for datagrams (packets) to be communicated between disparate networks, and they called it an ‘‘internet.’’ Their efforts created the transmission control protocol/internet protocol (TCP/IP). On 1 January 1983, TCP/IP replaced the earlier Network Control Protocol (NCP) on ARPAnet. Other networks adopted TCP/IP, and it became the dominant standard for all networking by the late 1990s.
In 1981 the U.S. National Science Foundation (NSF) created the Computer Science Network (CSNET) to provide universities that did not have access to ARPAnet with their own network. In 1986, the NSF sponsored the NSFNET ‘‘backbone’’ to connect five supercomputing centers. The backbone also connected ARPAnet and CSNET together, and the idea of a network of networks became firmly entrenched. The open technical architecture of the Internet allowed numerous innovations to be grafted easily onto the whole. When ARPAnet was dismantled in 1990, the Internet was thriving at universities and technology-oriented companies. The NSF backbone was dismantled in 1995 when the NSF realized that commercial entities could keep the Internet running and growing on their own, without government subsidy. Commercial network providers worked through the Commercial Internet Exchange to manage network traffic.
22. Mainframe Computers
The term ‘‘computer’’ currently refers to a general-purpose, digital, electronic, stored-program calculating machine. The term ‘‘mainframe’’ refers to a large, expensive, multiuser computer, able to handle a wide range of applications. The term was derived from the main frame or cabinet in which the central processing unit (CPU) and main memory of a computer were kept separate from those cabinets that held peripheral devices used for input and output.
Computers are generally classified as supercomputers, mainframes, minicomputers, or microcomputers. This classification is based on factors such as processing capability, cost, and applications, with supercomputers the fastest and most expensive. All computers were called mainframes until the 1960s, including the first supercomputer, the naval ordnance research calculator (NORC), offered by International Business Machines (IBM) in 1954. In 1960, Digital Equipment Corporation (DEC) shipped the PDP-1, a computer that was much smaller and cheaper than a mainframe.
Mainframes once each filled a large room, cost millions of dollars, and needed a full maintenance staff, partly in order to repair the damage caused by the heat generated by their vacuum tubes. These machines were characterized by proprietary operating systems and connections through dumb terminals that had no local processing capabilities. As personal computers developed and began to approach mainframes in speed and processing power, however, mainframes evolved to support a client/server relationship and to interconnect with open, standards-based systems. They have become particularly useful for systems that require reliability, security, and centralized control. Their ability to process large amounts of data quickly makes them particularly valuable for storage area networks (SANs). Mainframes today contain multiple CPUs, providing additional speed through multiprocessing operations. They support many hundreds of simultaneously executing programs, as well as numerous input and output processors for multiplexing devices, such as video display terminals and disk drives. Many legacy systems, large applications that have been developed, tested, and used over time, are still running on mainframes.
23. Mineral Prospecting
Twentieth century mineral prospecting draws upon the accumulated knowledge of previous exploration and mining activities, advancing technology, expanding knowledge of geologic processes and deposit models, and mining and processing capabilities to determine where and how to look for minerals of interest. Geologic models have been developed for a wide variety of deposit types; the prospector compares geologic characteristics of potential exploration areas with those of deposit models to determine which areas have similar characteristics and are suitable prospecting locations. Mineral prospecting programs are often team efforts, integrating general and site-specific knowledge of geochemistry, geology, geophysics, and remote sensing to ‘‘discover’’ hidden mineral deposits and ‘‘measure’’ their economic potential with increasing accuracy and reduced environmental disturbance. Once a likely target zone has been identified, multiple exploration tools are used in a coordinated program to characterize the deposit and its economic potential.
24. Packet Switching
Historically the first communications networks were telegraphic—the electrical telegraph replacing the mechanical semaphore stations in the mid-nineteenth century. Telegraph networks were largely eclipsed by the advent of the voice (telephone) network, which first appeared in the late nineteenth century, and provided the immediacy of voice conversation. The Public Switched Telephone Network allows a subscriber to dial a connection to another subscriber, with the connection being a series of telephone lines connected together through switches at the telephone exchanges along the route. This technique is known as circuit switching, as a circuit is set up between the subscribers, and is held until the call is cleared.
One of the disadvantages of circuit switching is that the capacity of the link is often significantly underused during silences in the conversation, yet the spare capacity cannot be shared with other traffic. Another disadvantage is the time it takes to establish the connection before the conversation can begin. One could liken this to sending a railway engine from London to Edinburgh to set the points before returning to pick up the carriages. What is required is a compromise between the immediacy of conversation on an established circuit-switched connection and the ad hoc delivery of a store-and-forward message system. This is what packet switching is designed to provide.
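As a hedged illustration of the contrast described above (the function names and the bare sequence-number "header" are invented for the example; real protocols carry addressing, error checking, and retransmission information as well), a packet switch breaks a message into small, individually numbered packets that can share a link with other traffic and be reassembled in order at the far end:

```python
# Minimal sketch of packetization and reassembly for a store-and-forward network.

def packetize(message: str, payload_size: int) -> list[tuple[int, str]]:
    """Split a message into (sequence_number, payload) packets."""
    return [
        (seq, message[i:i + payload_size])
        for seq, i in enumerate(range(0, len(message), payload_size))
    ]

def reassemble(packets: list[tuple[int, str]]) -> str:
    """Rebuild the message even if packets arrive out of order."""
    return "".join(payload for _, payload in sorted(packets))

if __name__ == "__main__":
    pkts = packetize("packet switching shares link capacity", payload_size=8)
    pkts.reverse()               # simulate out-of-order arrival
    print(reassemble(pkts))      # the original message is restored from sequence numbers
```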
25. Personal Computers
A personal computer, or PC, is designed for personal use. Its central processing unit (CPU) runs single-user system and application software, processes input from the user, and sends output to a variety of peripheral devices. Programs and data are stored in memory and attached storage devices. Personal computers are generally single-user desktop machines, but the term has been applied to any computer that ‘‘stands alone’’ for a single user, including portable computers.
The technology that enabled the construction of personal computers was the microprocessor, a programmable integrated circuit (or ‘‘chip’’) that acts as the CPU. Intel introduced the first microprocessor in 1971, the 4-bit 4004, which it called a ‘‘microprogrammable computer on a chip.’’ The 4004 was originally developed as a general-purpose chip for a programmable calculator, but Intel introduced it as part of Intel’s Microcomputer System 4-bit, or MCS-4, which also included read-only memory (ROM) and random-access memory (RAM) chips and a shift register chip. In August 1972, Intel followed with the 8-bit 8008, then the more powerful 8080 in June 1974. Following Intel’s lead, computers based on the 8080 were usually called microcomputers.
The success of the minicomputer during the 1960s prepared computer engineers and users for ‘‘single person, single CPU’’ computers. Digital Equipment Corporation’s (DEC) widely used PDP-10, for example, was smaller, cheaper, and more accessible than large mainframe computers. Timeshared computers operating under operating systems such as TOPS-10 on the PDP-10—co-developed by the Massachusetts Institute of Technology (MIT) and DEC in 1972—created the illusion of individual control of computing power by providing rapid access to personal programs and files. By the early 1970s, the accessibility of minicomputers, advances in microelectronics, and component miniaturization created expectations of affordable personal computers.
26. Printers
Printers generally can be categorized as either impact or nonimpact. Like typewriters, impact printers generate output by striking the page with a solid substance. Impact printers include daisy wheel and dot matrix printers. The daisy wheel printer, which was introduced in 1972 by Diablo Systems, operates by spinning the daisy wheel to the correct character whereupon a hammer strikes it, forcing the character through an inked ribbon and onto the paper. Dot matrix printers operate by using a series of small pins to strike a matrix or grid ribbon coated with ink. The strike of the pin forces the ink to transfer to the paper at the point of impact. Unlike daisy wheel printers, dot matrix printers can generate italic and other character types through producing different pin patterns. Nonimpact printers generate images by spraying or fusing ink to paper or other output media. This category includes inkjet printers, laser printers, and thermal printers. Whether they are inkjet or laser, impact or nonimpact, all modern printers incorporate features of dot matrix technology in their design: they operate by generating dots onto paper or other physical media.
27. Processors for Computers
A processor is the part of the computer system that manipulates the data. The first computer processors of the late 1940s and early 1950s performed three main functions and had three main components. They worked in a cycle to gather, decode, and execute instructions. They were made up of the arithmetic and logic unit, the control unit, and some extra storage components or registers. Today, most processors contain these components and perform these same functions, but since the 1960s they have developed different forms, capabilities, and organization. As with computers in general, increasing speed and decreasing size has marked their development.
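A minimal sketch of the gather-decode-execute cycle described above, using a toy machine invented purely for illustration (the opcodes, the accumulator-only design, and the function name are assumptions, not any historical processor):

```python
# Toy processor: a program is a list of (opcode, operand) pairs, an accumulator holds
# the working value, and the loop fetches, decodes, and executes one instruction per
# step -- the same three-phase cycle the text describes.

def run(program: list[tuple[str, int]]) -> int:
    acc = 0                                   # accumulator register
    pc = 0                                    # program counter
    while pc < len(program):
        opcode, operand = program[pc]         # fetch
        pc += 1
        if opcode == "LOAD":                  # decode and execute
            acc = operand
        elif opcode == "ADD":
            acc += operand
        elif opcode == "SUB":
            acc -= operand
        elif opcode == "HALT":
            break
        else:
            raise ValueError(f"unknown opcode {opcode}")
    return acc

if __name__ == "__main__":
    print(run([("LOAD", 7), ("ADD", 5), ("SUB", 2), ("HALT", 0)]))  # 10
```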
28. Radionavigation
Astronomical and dead-reckoning techniques furnished the methods of navigating ships until the twentieth century, when the exploitation of radio waves, coupled with electronics, met the needs of aircraft with their fast speeds and also transformed all navigational techniques. The application of radio to dead reckoning has allowed vessels to determine their positions in all weather by direction finding (known as radio direction finding, or RDF) or by hyperbolic systems. Another use of radio, radar (radio detection and ranging), enables vessels to determine their distance to, or their bearing from, objects of known position. Radionavigation complements traditional navigational methods by employing three frames of reference. First, radio enables a vessel to navigate by lines of bearing to shore transmitters (the most common use of radio). This is directly analogous to the use of lighthouses for bearings. Second, shore stations may take radio bearings of craft and relay to them computed positions. Third, radio beacons provide aircraft or ships with signals that function as true compasses.
29. Software Application Programs
At the beginning of the computer age around the late 1940s, inventors of the intelligent machine were not thinking about applications software, or any software other than that needed to run the bare machine to do mathematical calculating. It was only when Maurice Wilkes’ young protégé David Wheeler crafted a tidy set of initial orders for the EDSAC, an early programmable digital computer, that users could string standard subroutines together into a program and have execution jump between them. This was the beginning of software as we know it—something, other than an operating system, that runs on a machine to make it do anything desired. ‘‘Applications’’ are software other than the system programs that run the actual hardware. Manufacturers always had this software, and as the 1950s progressed they would ‘‘bundle’’ applications with hardware to make expensive computers more attractive. Some programming departments were even placed in the marketing departments.
30. Software Engineering
Software engineering aims to develop the programs that allow digital computers to do useful work in a systematic, disciplined manner that produces high-quality software on time and on budget. As computers have spread throughout industrialized societies, software has become a multibillion dollar industry. Both the users and developers of software depend a great deal on the effectiveness of the development process.
Software is a concept that didn’t even pertain to the first electronic digital computers. They were ‘‘programmed’’ through switches and patch cables that physically altered the electrical pathways of the machine. It was not until the Manchester Mark I, the first operational stored-program electronic digital computer, was developed in 1948 at the University of Manchester in England that configuring the machine to solve a specific problem became a matter of software rather than hardware. Subsequently, instructions were stored in memory along with data.
31. Supercomputers
Supercomputers are high-performance computing devices that are generally used for numerical calculation, for the study of physical systems either through numerical simulation or the processing of scientific data. Initially, they were large, expensive, mainframe computers, which were usually owned by government research labs. By the end of the twentieth century, they were more often networks of inexpensive small computers. The common element of all of these machines was their ability to perform high-speed floating-point arithmetic—binary arithmetic that approximates decimal numbers with a fixed number of bits—the basis of numerical computation.
With the advent of inexpensive supercomputers, these machines moved beyond the large government labs and into smaller research and engineering facilities. Some were used for the study of social science. A few were employed by business concerns, such as stock brokerages or graphic designers.
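The remark above that floating-point arithmetic approximates decimal numbers with a fixed number of bits can be seen directly in any modern language; the short Python check below is an illustration added here, not something from the original text.

```python
# Binary floating point stores values in a fixed number of bits, so many decimal
# fractions are only approximated -- the tiny residual below is that approximation.

a = 0.1 + 0.2
print(a)                      # 0.30000000000000004
print(a == 0.3)               # False: exact equality fails
print(abs(a - 0.3) < 1e-9)    # True: numerical code compares with a tolerance instead
```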
32. Systems Programs
The operating systems used in all computers today are a result of the development and organization of early systems programs designed to control and regulate the operations of computer hardware. Early computing machines such as the ENIAC of 1945 were ‘‘programmed’’ manually by connecting cables and setting switches for each new calculation. With the advent of the stored-program computer in the late 1940s (the Manchester Mark I, the EDVAC, and the EDSAC, or electronic delay storage automatic calculator), the first system programs such as assemblers and compilers were developed and installed. These programs performed oft-repeated, basic operations for computer use, including converting programs into machine code, storing and retrieving files, managing computer resources and peripherals, and aiding in the compilation of new programs. With the advent of programming languages, and the dissemination of more computers in research centers, universities, and businesses during the late 1950s and 1960s, a large group of users began developing programs, improving usability, and organizing system programs into operating systems.
The 1970s and 1980s saw a turn away from some of the complications of system software, an interweaving of features from different operating systems, and the development of systems programs for the personal computer. In the early 1970s, two programmers from Bell Laboratories, Ken Thompson and Dennis Ritchie, developed a smaller, simpler operating system called UNIX. Unlike past system software, UNIX was portable and could be run on different computer systems. Due in part to low licensing fees and simplicity of design, UNIX increased in popularity throughout the 1970s. Research at the Xerox Palo Alto Research Center during the 1970s produced the graphical user interface (GUI) concepts that later shaped the system software of the Apple Macintosh computer. This type of system software mediates the user’s interaction with the computer through graphics and icons representing computer processes. In 1985, a year after the release of the Apple Macintosh computer, a GUI was overlaid on Microsoft’s then-dominant operating system, MS-DOS, to produce Microsoft Windows. The Microsoft Windows series of operating systems became and remains the dominant operating system on personal computers.
33. World Wide Web
The World Wide Web (Web) is a ‘‘finite but unbounded’’ collection of media-rich digital resources that are connected through high-speed digital networks. It relies upon an Internet protocol suite that supports cross-platform transmission and makes available a wide variety of media types (i.e., multimedia). The cross-platform delivery environment represents an important departure from more traditional network communications protocols such as e-mail, telnet, and the file transfer protocol (FTP) because it is content-centric. It is also to be distinguished from earlier document acquisition systems such as Gopher, which was designed in 1991, originally as a mainframe program but quickly implemented over networks, and the wide area information server (WAIS), also released in 1991. WAIS accommodated a narrower range of media formats and failed to include hyperlinks within its navigation protocol. Following the success of Gopher on the Internet, the Web quickly extended and enriched the metaphor of integrated browsing and navigation. This made it possible to navigate and peruse a wide variety of media types effortlessly on the Web, which in turn led to the Web’s hegemony as an Internet protocol.

History of Computer Technology

The modern computer—the (electronic) digital computer in which the stored program concept is realized and hence self-modifying programs are possible—was only invented in the 1940s. Nevertheless, the history of computing (interpreted as the usage of modern computers) is only understandable against the background of the many forms of information processing as well as mechanical computing devices that solved mathematical problems in the first half of the twentieth century. The part these several predecessors played in the invention and early history of the computer may be interpreted from two different perspectives: on the one hand it can be argued that these machines prepared the way for the modern digital computer, on the other hand it can be argued that the computer, which was invented as a mathematical instrument, was reconstructed to be a data-processing machine, a control mechanism, and a communication tool.
The invention and early history of the digital computer has its roots in two different kinds of developments: first, information processing in business and government bureaucracies; and second, the use and the search for mathematical instruments and methods that could solve mathematical problems arising in the sciences and in engineering.
Origins in Mechanical Office Equipment
The development of information processing in business and government bureaucracies had its origins in the late nineteenth century, which was not just an era of industrialization and mass production but also a time of continuous growth in administrative work. The economic precondition for this development was the creation of a global economy, which caused growth in production of goods and trade. This brought with it an immense increase in correspondence, as well as monitoring and accounting activities—corporate bureaucracies began to collect and process data in increasing quantities. Almost at the same time, government organizations became more and more interested in collating data on population and demographic changes (e.g., expanding tax revenues, social security, and wide-ranging planning and monitoring functions) and analyzing this data statistically.
Bureaucracies in the U.S. and in Europe reacted in a different way to these changes. While in Europe for the most part neither office machines nor telephones entered offices until 1900, in the U.S. in the last quarter of the nineteenth century the information-handling techniques in bureaucracies were radically changed because of the introduction of mechanical devices for writing, copying, and counting data. The rise of big business in the U.S. had caused a growing demand for management control tools, which was fulfilled by a new ideology of systematic management together with the products of the rising office machines industry. Because of a later start in industrialization, the government and businesses in the U.S. were not forced to reorganize their bureaucracies when they introduced office machines. This, together with an ideological preference for modern office equipment, was the cause of a market for office machines and of a far-reaching mechanization of office work in the U.S. In the 1880s typewriters and cash registers became very widespread, followed by adding machines and book-keeping machines in the 1890s. From 1880 onward, the makers of office machines in the U.S. underwent a period of enormous growth, and in 1920 the office machine industry annually generated about $200 million in revenue. In Europe, by comparison, mechanization of office work emerged about two decades later than in the U.S.—both Germany and Britain adopted the American system of office organization and extensive use of office machines for the most part no earlier than the 1920s.
During the same period the rise of a new office machine technology began. Punched card systems, initially invented by Herman Hollerith to analyze the U.S. census of 1890, were introduced. By 1911 Hollerith’s company had only about 100 customers, but after it was merged in the same year with two other companies to become the Computing-Tabulating-Recording Company (CTR), it began a tremendous ascent to become the world leader in the office machine industry. CTR’s general manager, Thomas J. Watson, understood the extraordinary potential of these punched-card accounting devices, which enabled their users to process enormous amounts of data largely automatically, rapidly, and at a reasonable level of cost and effort. Due to Watson’s insights and his extraordinary management abilities, the company (which had since been renamed International Business Machines, or IBM) became the fourth largest office machine supplier in the world by 1928—topped only by Remington Rand, National Cash Register (NCR), and the Burroughs Adding Machine Company.
Origin of Calculating Devices and Analog Instruments
Compared with the fundamental changes in the world of corporate and government bureaucracies caused by office machinery during the late nineteenth and early twentieth century, calculating machines and instruments seemed to have only a minor influence in the world of science and engineering. Scientists and engineers had always been confronted with mathematical problems and had over the centuries developed techniques such as mathematical tables. However, many new mathematical instruments emerged in the nineteenth century and increasingly began to change the world of science and engineering. Apart from the slide rule, which came into popular use in Europe from the early nineteenth century onwards (and became the symbol of the engineer for decades), calculating machines and instruments were only produced on a large scale in the middle of the nineteenth century.
In the 1850s the production of calculating machines as well as that of planimeters (used to measure the area of closed curves, a typical problem in land surveying) started on different scales. Worldwide, less than 2,000 calculating machines were produced before 1880, but more than 10,000 planimeters were produced by the early 1880s. Also, various types of specialized mathematical analog instruments were produced on a very small scale in the late nineteenth century; among them were integraphs for the graphical solution of special types of differential equations, harmonic analyzers for the determination of Fourier coefficients of a periodic function, and tide predictors that could calculate the time and height of the ebb and flood tides.
Nonetheless, in 1900 only geodesists and astronomers (as well as part of the engineering community) made extensive use of mathematical instruments. In addition, the establishment of applied mathematics as a new discipline took place at German universities on a small scale and the use of apparatus and machines as well as graphical and numerical methods began to flourish during this time. After World War I, the development of engineering sciences and of technical physics gave a tremendous boost to applied mathematics in Germany and Britain. In general, scientists and engineers became more aware of the capabilities of calculating machines and a change of the calculating culture—from the use of tables to the use of calculating machines—took place.
One particular problem increasingly encountered by mechanical and electrical engineers in the 1920s was the solution of several types of differential equations that could not be solved analytically. As one important result of this development, a new type of analog instrument, the so-called "differential analyzer," was invented in 1931 by the engineer Vannevar Bush at the Massachusetts Institute of Technology (MIT). In contrast to its predecessors (several types of integraphs), this machine, which was later called an analog computer, could be used to solve not only a special class of differential equation but a more general class of differential equations associated with engineering problems. Before the digital computer was invented in the 1940s, analog instruments of this kind were used intensively, and a number of machines modeled on Bush's differential analyzer were constructed in the U.S. and in Europe before and during World War II. Analog instruments also became increasingly important in fields such as the fire control of artillery on warships and the control of rockets. It is worth noting that an analog computer could be constructed only for a limited class of scientific and engineering problems; weather forecasting and the problem of shock waves produced by an atomic bomb, for example, required the solution of partial differential equations, for which a digital computer was needed.
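To make the kind of problem concrete, the sketch below (a minimal illustration added here, not taken from the original text) numerically integrates a damped oscillator equation, a typical engineering differential equation of the period, with the explicit Euler method; a differential analyzer solved such ordinary differential equations mechanically, while a digital machine approximates them step by step.

```python
# Minimal sketch (illustrative only): step-by-step integration of
# m*x'' + c*x' + k*x = 0, the kind of ordinary differential equation
# engineers wanted differential analyzers (and later digital computers) to solve.

def euler_damped_oscillator(m=1.0, c=0.4, k=2.0, x0=1.0, v0=0.0, dt=0.01, steps=1000):
    x, v = x0, v0
    trajectory = []
    for _ in range(steps):
        a = -(c * v + k * x) / m      # acceleration from the equation of motion
        x += dt * v                   # explicit Euler update of position
        v += dt * a                   # explicit Euler update of velocity
        trajectory.append(x)
    return trajectory

print(euler_damped_oscillator()[:5])  # first few positions of the decaying oscillation
```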
The Invention of the Computer
The invention of the electronic digital stored-program computer is directly connected with the development of numerical calculation tools for the solution of mathematical problems in the sciences and in engineering. The ideas that led to the invention of the computer were developed simultaneously by scientists and engineers in Germany, Britain, and the U.S. in the 1930s and 1940s. The first freely programmable, program-controlled automatic calculator was developed by the civil engineering student Konrad Zuse in Germany. Zuse started development work on program-controlled computing machines in the 1930s, when he had to deal with extensive calculations in statics, and in 1941 his Z3, which was based on electromechanical relay technology, became operational.
Several similar developments were in progress in the U.S. at the same time. In 1937 Howard Aiken, a physics student at Harvard University, approached IBM to build a program-controlled calculator, later called the "Harvard Mark I." On the basis of a concept Aiken had developed out of his experience with the numerical solution of partial differential equations, the machine was built and became operational in 1944. At almost the same time a series of important relay computers was built at the Bell Laboratories in New York following a suggestion by George R. Stibitz. All these developments in the U.S. were spurred by the outbreak of World War II. The first large-scale programmable electronic computer, the Colossus, was built in complete secrecy in 1943 to 1944 at Bletchley Park in Britain in order to help break the German Lorenz teleprinter ciphers.
However, it was neither these relay calculators nor the Colossus that proved decisive for the development of the universal computer, but the ENIAC (Electronic Numerical Integrator and Computer), developed at the Moore School of Electrical Engineering at the University of Pennsylvania. Extensive ballistic calculations were carried out there for the U.S. Army during World War II with the aid of the Bush differential analyzer and more than a hundred women ("computers") working on mechanical desk calculators. Because this capacity was barely sufficient to compute the artillery firing tables, the physicist John W. Mauchly and the electronic engineer John Presper Eckert began developing the ENIAC, a digital version of the differential analyzer, in 1943 with funding from the U.S. Army.
In 1944 the mathematician John von Neumann turned his attention to the ENIAC because of his mathematical work on the Manhattan Project (on the implosion design of the plutonium bomb). While the ENIAC was being built, von Neumann and the ENIAC team drew up plans for a successor in order to remedy the shortcomings of the ENIAC concept, such as its very small memory and the time-consuming reprogramming (in fact rewiring) required to change the setup for a new calculation. In these meetings the idea of a stored-program, universal machine evolved: memory would be used to store the program in addition to data, enabling the machine to execute conditional branches and change the flow of the program. The concept of a computer in the modern sense of the word was born, and in 1945 von Neumann wrote the important "First Draft of a Report on the EDVAC," which described the stored-program, universal computer. The logical structure presented in this draft report is now referred to as the "von Neumann architecture." The EDVAC report was originally intended for internal use, but once it was made freely available it became the "bible" for computer pioneers throughout the world in the 1940s and 1950s. One of the first computers featuring the von Neumann architecture operated at Cambridge University in the U.K.: in June 1949 the EDSAC (Electronic Delay Storage Automatic Computer), built by Maurice Wilkes and designed according to the EDVAC principles, became operational.
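As a rough illustration of the stored-program idea (a hypothetical sketch, not drawn from the EDVAC report itself), the following toy machine keeps a tiny program and its data in one memory array and supports a conditional branch, which is what lets the flow of the program depend on computed results.

```python
# Minimal sketch of a hypothetical stored-program machine: instructions and
# data live in the same memory, and a conditional jump (JUMPZ) lets the
# program's flow depend on computed values. Illustrative only.

def run(memory):
    acc, pc = 0, 0                          # accumulator and program counter
    while True:
        op, arg = memory[pc]
        pc += 1
        if op == "LOAD":
            acc = memory[arg]
        elif op == "ADD":
            acc += memory[arg]
        elif op == "STORE":
            memory[arg] = acc
        elif op == "JUMPZ":                 # conditional branch on zero
            if acc == 0:
                pc = arg
        elif op == "JUMP":
            pc = arg
        elif op == "HALT":
            return memory

program_and_data = [
    ("LOAD", 7),   # 0: acc <- counter
    ("JUMPZ", 5),  # 1: if counter == 0, stop
    ("ADD", 6),    # 2: acc <- acc - 1 (memory[6] holds -1)
    ("STORE", 7),  # 3: counter <- acc
    ("JUMP", 0),   # 4: repeat
    ("HALT", 0),   # 5
    -1,            # 6: constant
    3,             # 7: counter, counted down to 0 by the program
]
print(run(program_and_data)[7])  # prints 0
```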
The Computer as a Scientific Instrument
As soon as the computer was invented, a growing demand for computers by scientists and engineers evolved, and numerous American and European universities started their own computer projects in the 1940s and 1950s. After the technical difficulties of building an electronic computer had been solved, scientists grasped the opportunity to use the new scientific instrument for their research. For example, at the University of Göttingen in Germany, the early computers were used for initial value problems of partial differential equations arising from hydrodynamic problems in atomic physics and aerodynamics. Another striking example was the application of computers by von Neumann's group at the Institute for Advanced Study (IAS) in Princeton to numerical weather forecasting in 1950. As a result, numerical weather forecasts could be made on a regular basis from the mid-1950s onwards.
Mathematical methods have always been of considerable importance for the sciences and engineering, but only the use of the electronic digital computer (as an enabling technology) made it possible to broaden the application of mathematical methods to such a degree that, by the end of the twentieth century, research in science, medicine, and engineering without computer-based mathematical methods had become virtually inconceivable. A number of additional computer-based techniques, such as scientific visualization, medical imaging, computerized tomography, pattern recognition, image processing, and statistical applications, have become of the utmost significance for science, medicine, engineering, and the social sciences. In addition, the computer fundamentally changed the way engineers construct technical artifacts through computer-based methods such as computer-aided design (CAD), computer-aided manufacture (CAM), computer-aided engineering, control applications, and finite-element methods. The most striking example, however, is probably the development of scientific computing and computer modeling, which became accepted as a third mode of scientific research complementing experimentation and theoretical analysis. Scientific computing and computer modeling rely on supercomputers as their enabling technology, and these became important tools of modern science, routinely used to simulate physical and chemical phenomena. For many years such high-speed computers were equated with the machines developed by Seymour Cray, who built the fastest computers in the world. The supercomputers he launched, such as the legendary Cray-1 from 1976, were the basis for computer modeling of real-world systems and helped, for example, the defense industry in the U.S. to build weapons systems and the oil industry to create geological models that show potential oil deposits.
Growth of Digital Computers in Business and Information Processing
When the digital computer was invented as a mathematical instrument in the 1940s, it could not have been foreseen that this new artifact would ever be of such importance in the business world. About 50 firms entered the computer business worldwide in the late 1940s and early 1950s, and the computer was reconstructed as a type of electronic data-processing machine that took the place of punched-card technology as well as other office machine technology. It is interesting that mainly three types of companies were building computers in the 1950s and 1960s: newly created computer firms (such as the company founded by the ENIAC inventors Eckert and Mauchly), electronics and control equipment firms (such as RCA and General Electric), and office appliance companies (such as Burroughs and NCR). Despite the fact that the first digital computers were put on the market by a German and a British company, U.S. firms dominated the world market from the 1950s onward, as they had the biggest market as well as financial support from the government.
Generally speaking, the Cold War exerted an enormous influence on the development of computer technology. Until the early 1960s the U.S. military and the defense industry were the central drivers of the expansion of digital computing, serving as the main market for computer technology and shaping and accelerating the formation of the rising computer industry. Because of the U.S. military's role as the "tester" of prototype hardware and software, it had a direct and lasting influence on technological developments; at the same time, the spread of computer technology was partly hindered by military secrecy. Even after the emergence of a large civilian computer market in the 1960s, the U.S. military maintained its influence by investing heavily in computer hardware and software and in computer research projects.
From the middle of the 1950s onwards the world computer market was dominated by IBM, which accounted for more than 70 percent of computer industry revenues until the mid-1970s. The reasons for IBM's overwhelming success were diverse, but the company had at its disposal a unique combination of technical and organizational capabilities that prepared it perfectly for the mainframe computer market. In addition, IBM benefited from enormous government contracts, which helped it develop excellence in computer technology and design. The greatest advantage of IBM, however, was without doubt its marketing organization and its reputation as a service-oriented firm that was used to working closely with customers to adapt machinery to their specific problems, and this key difference between IBM and its competitors persisted right into the computer age.
During the late 1950s and early 1960s, the computer market, consisting of IBM and seven other companies known as the "seven dwarfs," was dominated by IBM with its 650 and 1401 computers. By 1960 the market for computers was still small: only about 7,000 computers had been delivered by the computer industry, and at this time even IBM was primarily a punched-card machine supplier, punched-card equipment still being the major source of its income. Only in 1960 did a boom in demand for computers begin, and by 1970 the number of computers installed worldwide had increased to more than 100,000. The computer industry was on track to become one of the world's major industries, and it was totally dominated by IBM.
The outstanding computer system of this period was IBM's System/360. Announced in 1964 as a compatible family of computers sharing the same architecture and interchangeable peripheral devices, it was intended to solve IBM's problems with a hotchpotch of incompatible product lines, which had caused large problems in the development and maintenance of a great number of different hardware and software products. Although neither the technology used nor the systems programming was particularly advanced at the time, the System/360 established a new standard for mainframe computers that lasted for decades. Various computer firms in the U.S., Europe, Japan, and even Russia concentrated on copying components and peripherals for the System/360 or tried to build System/360-compatible computers.
The growth of the computer market during the 1960s was accompanied by market shakeouts: two of the "seven dwarfs" left the computer business after the first computer recession in the early 1970s, and afterwards the computer market was controlled by IBM and the BUNCH (Burroughs, UNIVAC, NCR, Control Data, and Honeywell). At the same time, an internationalization of the computer market took place in which U.S. companies controlled the world market for computers. This caused considerable fears over loss of national independence among European and Japanese governments, which subsequently launched national computing programs. While the European attempts to create national champions, as well as the more general attempt to create a Europe-wide market for mainframe computers, failed in the end, Japan's attempt to found a national computer industry succeeded: to this day Japan is the only nation able to compete with the U.S. across a wide array of high-tech computer-related products.
Real-Time and Time-Sharing
Until the 1960s almost all computers in government and business ran batch-processing applications; that is, the computers were used in essentially the same way as the punched-card accounting machines they had replaced. In the early 1950s, however, the computer industry introduced into the business sector a new mode of computing called "real time," which had originally been developed for military purposes in MIT's Whirlwind project. This project had started in World War II with the aim of designing an aircraft simulator by analog methods, and it later became part of a research and development program for SAGE (Semi-Automatic Ground Environment), the gigantic computerized anti-aircraft defense system built by IBM in the 1950s.
The demand for this new mode of computing was created by cultural and structural changes in the economy. The increasing number of financial transactions in banks and insurance companies, as well as growing airline travel, made new computer-based information systems necessary, and these finally led to new forms of business built on information technology.
The case of SABRE, the first computerized airline reservation system, developed for American Airlines by IBM in the 1950s and finally implemented in the early 1960s, illustrates these cultural and structural changes in the economy. Until the early 1950s, airline reservations had been made manually without major problems, but by 1953 this system was in crisis because increased air traffic and growing flight plan complexity had made reservation costs unsupportable. SABRE became a complete success, demonstrating the potential of centralized real-time computing systems connected via a network. The system enabled flight agents throughout the U.S., equipped with desktop terminals, to gain direct, real-time access to the central reservation system running on central IBM mainframe computers, while the airline was able to assign appropriate resources in response. SABRE thus offered an effective combination of advantages: better utilization of resources and much greater customer convenience.
Very soon this new mode of computing spread throughout the business and government world and became commonplace in the service and distribution sectors of the economy; bank tellers and insurance account representatives, for example, increasingly worked at terminals. On the one hand, structural information problems led managers in this direction; on the other hand, the increasing use of computers as information-handling machines in government and business had brought about the idea of directly accessible, computer-based data retrieval. In the end, more and more IBM customers wanted to link dozens of operators directly to central computers through terminal keyboards and display screens.
In the late 1950s and early 1960s, at the same time that IBM and American Airlines had begun the development of the SABRE airline reservation system, a group of brilliant computer scientists developed a new idea for computer usage called "time sharing." Instead of dedicating a multi-terminal system to a single application, they envisioned a computer utility: a mainframe organized so that several users could interact with it simultaneously. This vision was to change the nature of computing profoundly, because computing no longer had to be mediated for nonspecialist users by programmers and systems analysts, and by the late 1960s time-sharing computers had become widespread in the U.S.
Particularly important for this development was the work of J.C.R. Licklider at the Advanced Research Projects Agency (ARPA) of the U.S. Department of Defense. In 1960 Licklider published the now-classic paper "Man–Computer Symbiosis," proposing the use of computers to augment human intellect and creating the vision of interactive computing. Licklider was very successful in translating into action his idea of a network allowing people on different computers to communicate, and he convinced ARPA to start an enormous research program in 1962 whose budget surpassed that of all other sources of U.S. public research funding for computers combined. The ARPA research programs resulted in a series of fundamental advances in computer technology in areas such as computer graphics, artificial intelligence, and operating systems. Even Unix, the general-purpose time-sharing system developed in the early 1970s at the Bell Laboratories and the most influential operating system today, was a spin-off of Multics, an ambitious ARPA-funded operating system project. The designers of Unix deliberately avoided complexity through a clear, minimalist approach to software design and created a multitasking, multiuser operating system that became a standard operating system in the 1980s.
Electronic Component Revolution
While the nature of business computing was being changed by new paradigms such as real time and time sharing, advances in solid-state components increasingly became a driving force for fundamental changes in the computer industry and led to a dynamic interplay between new computer designs and new programming techniques that resulted in a remarkable series of technical developments. The technical progress of the mainframe computer had always run parallel to changes in electronic components. During the period from 1945 to 1965, two fundamental transformations in the electronics industry took place, marked by the invention of the transistor in 1947 and of the integrated circuit in 1958 to 1959. While the first generation of computers, lasting until about 1960, was characterized by vacuum tubes (valves) as switching elements, the second generation used the much smaller and more reliable transistors, which could be produced at a lower price. A new phase was inaugurated when an entire integrated circuit was produced on a chip of silicon in 1961 and the first integrated circuits were produced for the military in 1962. A remarkable pace of progress in semiconductor innovation, known as the "revolution in miniature," began to speed up the computer industry. The third generation of computers, characterized by the use of integrated circuits, began with the announcement of the IBM System/360 in 1964 (although this computer system did not use true integrated circuits). The most important effect of the introduction of integrated circuits was not to strengthen the leading mainframe computer systems but to undermine Grosch's Law, which stated that computing power increases as the square of its cost. In fact, the cost of computing power fell dramatically during the next ten years.
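As a rough formalization (a sketch added here, not taken from the original text), Grosch's Law can be written as a power relation between computing power P and system cost C; under it, larger systems deliver cheaper computation per unit of power, an economy of scale that favored centralized mainframes until inexpensive integrated circuits broke the relationship.

```latex
% Grosch's Law, stated informally in the text, as a formula (illustrative):
% computing power P grows as the square of system cost C.
P = k\,C^{2}
  \quad\Longrightarrow\quad
  \frac{C}{P} = \frac{1}{k\,C}
% The cost per unit of computing power falls as C grows, so bigger machines
% looked more economical; cheap integrated circuits undermined this.
```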
This became clear with the introduction of the first computer to use integrated circuits on a full scale: in 1965 the Digital Equipment Corporation (DEC) offered its PDP-8 computer for just $18,000, creating a new class of computers called minicomputers, small in size and low in cost, and opening up the market to new customers. Minicomputers were mainly used outside general-purpose computing, in areas such as industrial applications and interactive graphics systems. The PDP-8 became the first widely successful minicomputer, with over 50,000 units sold, demonstrating that there was a market for smaller computers. DEC's success (by 1970 it had become the world's third largest computer manufacturer) was supported by dramatic advances in solid-state technology. During the 1960s the number of transistors on a chip doubled every two years, and as a result minicomputers became steadily more powerful and less expensive at a remarkable pace.
Personal Computing
The most striking consequence of the exponential increase in the number of transistors on a chip during the 1960s, as captured by "Moore's Law" (the number of transistors on a chip doubled every two years), was not the falling cost of mainframe and minicomputer processing and storage, but the introduction around 1970 of the first consumer products based on chip technology, such as hand-held calculators and digital watches. The markets in these industries were transformed almost overnight by the shift from mechanical to chip technology, which led to a collapse in prices as well as a dramatic industry shakeout. These episodes marked only the beginning of wide-ranging changes in the economy and society during the last quarter of the twentieth century, leading to a situation in which chips played an essential role in almost every part of business and modern life.
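To make the compounding concrete, here is a small illustrative calculation (the starting value of 2,300 transistors, roughly the count of an early-1970s microprocessor, is only an assumption for the example):

```python
# Illustrative sketch of Moore's Law with a two-year doubling period.
def transistors_after(years, start=2_300, doubling_period_years=2.0):
    """Transistor count after `years`, assuming one doubling every two years."""
    return start * 2 ** (years / doubling_period_years)

print(transistors_after(10))   # 73600.0 -- a 32-fold increase in one decade
print(transistors_after(20))   # about 2.36 million -- a 1,024-fold increase
```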
The case of the invention of the personal computer illustrates that developing the microprocessor as the enabling technology was not by itself sufficient to create a new product, and how strongly new technologies can be socially constructed by cultural factors and commercial interests. When the microprocessor, a single-chip integrated circuit implementation of a CPU, was launched by the semiconductor company Intel in 1971, there was no technical obstacle to producing a reasonably priced microcomputer, but it took six years for the PC to emerge as a consumer product. None of the traditional mainframe and minicomputer companies was involved in creating the early personal computer. Instead, a group of computer hobbyists together with the "computer liberation" movement in the U.S. became the driving force behind the invention of the PC. These two groups were desperately keen on a low-priced type of minicomputer for use at home for leisure activities such as computer games; beyond that, they had the counterculture vision of freely available, personal access to an inexpensive computer utility rich in information. When in 1975 the Altair 8800, an Intel 8080 microprocessor-based computer, was offered as an electronic hobbyist kit for less than $400, these two groups began to realize their vision of a "personal computer." Very soon dozens of computer clubs and computer magazines were founded around the U.S., and these computer enthusiasts created the personal computer by combining the Altair with keyboards, disk drives, and monitors, and by developing standard software for it. In only two years, a more or less useless hobbyist kit had been turned into a computer that could easily be transformed into a consumer product.
The computer hobbyist period ended in 1977, when the first standard machines for an emerging mass consumer market were sold. These included products such as the Commodore PET and the Apple II, which came with its own monitor, disk drive, and keyboard and was provided with several basic software packages. Over the next three years, spreadsheet, word processing, and database software were developed, and an immense market for games software evolved. As a result, personal computers increasingly became a consumer product for ordinary people, and Apple's revenues shot to more than $500 million in 1982. By 1980, the personal computer had also become a business machine, and IBM decided to develop its own personal computer, introduced as the IBM PC in 1981. It became an overwhelming success and set a new industry standard.
Apple tried to compete by launching its new Macintosh computer in 1984, equipped with a revolutionary graphical user interface (GUI) that set a new standard for user-friendly human–computer interaction. It was based on technology created by computer scientists at the Xerox Palo Alto Research Center in California, who had picked up on ideas about human–computer interaction developed at the Stanford Research Institute and at the University of Utah. Although the Macintosh's GUI was far superior to the MS-DOS operating system of the IBM-compatible PCs, Apple failed to win the business market and remained a niche player with a market share of about 10 percent. The mainstream of PC development was determined by the companies IBM had chosen in 1981 as its original suppliers of the microprocessor (Intel) and the operating system (Microsoft). While IBM failed in a software war with Microsoft to seize control of the operating system market for PCs, Microsoft achieved dominance during the first half of the 1990s not only in the key market for PC operating systems but also in the key market for office applications.
In the early 1990s computing underwent further fundamental changes with the appearance of the Internet, and for most computer users networking became an integral part of what it means to have a computer. Furthermore, the rise of the Internet signaled the impending arrival of a new "information infrastructure" and of a "digital convergence," as the coupling of computers and communications networks was often called.
In addition, the 1990s were a period of an information technology boom, based largely on the hype around the Internet. To a great many managers and journalists it seemed for years that the Internet would become not just an indispensable business tool but also a miracle cure for economic growth and prosperity. Computer scientists and sociologists likewise began a discussion predicting the start of a new "information age" based on the Internet as a "technological revolution" reshaping the "material basis" of industrial societies.
The Internet was the outcome of an unusual collaboration within a military–industrial–academic complex that promoted the development of this extraordinary innovation. It grew out of a military network called the ARPAnet, a project established and funded by ARPA in the 1960s. The ARPAnet was initially devoted to supporting data communications for defense research projects and was used by only a small number of researchers in the 1970s. Its further development was driven primarily by unplanned forms of network usage. Users of the ARPAnet were strongly attracted by the opportunity to communicate through electronic mail, which rapidly surpassed all other forms of network activity. Another unplanned spin-off of the ARPAnet was Usenet (the Unix User Network), which started in 1979 as a link between two universities and enabled its users to subscribe to newsgroups. Electronic mail became a driving force for the creation of a large number of new proprietary networks funded by the existing computer services industry or by organizations such as the NSF (NSFnet). Because network users wanted email to be able to cross network boundaries, an ARPA project on "internetworking" became the origin of the "Internet": a network of networks linked by several layers of protocols, such as TCP/IP (Transmission Control Protocol/Internet Protocol), which quickly developed into the de facto standard.
Only after government funding had solved many of the most essential technical issues and had shaped a number of the Internet's most characteristic features did private-sector entrepreneurs start Internet-related ventures and quickly develop user-oriented enhancements. Nevertheless, the Internet did not make a promising start, and it took more than ten years before significant numbers of networks were connected. In 1980 the Internet had fewer than two hundred hosts, and over the next four years the number of hosts rose only to about 1,000. Only when the Internet reached the educational and business community of PC users in the late 1980s did it start to become an important economic and social phenomenon. The number of hosts then began to grow explosively; by 1988 there were over 50,000. An important and unforeseen side effect of this development was the transformation of the Internet into a new electronic publishing medium. The electronic publishing development that excited the most interest was the World Wide Web, originally developed at the CERN high-energy physics laboratory in Geneva in 1989. Soon there were millions of documents on the Internet, and private PC users became excited by the joys of surfing the web. A number of firms such as AOL soon provided low-cost network access and a range of consumer-oriented information services. The Internet boom was also helped by the Clinton–Gore presidential election campaign's promotion of the "information superhighway" and by the extensive news reporting on the national information infrastructure in the early 1990s. Nevertheless, many observers were astounded by how fast the number of hosts on the Internet increased over the next few years: from more than 1 million in 1992 to 72 million in 1999.
The overwhelming success of the PC and of the Internet tends to hide the fact that their arrival marked only a branching in computer history, not a succession. Mainframe computers, for example, continue to run and remain of great importance to government facilities and to the private sector (such as banks and insurance companies), and supercomputers remain of the utmost significance for modern science and engineering. Furthermore, only a small part of the computing performed today is easily observable: some 98 percent of programmable CPUs are used in embedded systems such as automobiles, medical devices, washing machines, and mobile telephones.
224 Research Topics on Technology & Computer Science
Are you new to the world of technology? Do you need topics related to technology to write about? No worries, Custom-writing.org experts are here to help! In this article, we offer you a multitude of creative and interesting technology topics from various research areas, including information technology and computer science. So, let’s start!
- 🔝 Top 10 Topics
- 👋 Introduction
- 💾 Top 10 Computer Science Topics
- ⚙ Artificial Intelligence
- 💉 Biotechnology
- 📡 Communications and Media
- 💻 Computer Science & Engineering
- 🔋 Energy & Power Technologies
- 🍗 Food Technology
- 😷 Medical Devices & Diagnostics
- 💊 Pharmaceutical Technologies
- 🚈 Transportation
- ✋ Conclusion
- 🔍 References

🔝 Top 10 Technology Topics
- The difference between VR and AR
- Is genetic engineering ethical?
- Can digital books replace print ones?
- The impact of virtual reality on education
- 5 major fields of robotics
- The risks and dangers of biometrics
- Nanotechnology in medicine
- Digital technology’s impact on globalization
- Is proprietary software less secure than open-source?
- The difference between deep learning and machine learning
Is it a good thing that technologies and computer science are developing so fast? No one knows for sure. There are too many different opinions, and some of them are quite radical! However, we know that technologies have changed our world once and forever. Computer science affects every single area of people’s lives.

Just think about Netflix. Can you imagine that 23 years ago it didn't exist? How did people live without it? Well, in 2023, the entertainment field has gone so far that you can travel anywhere while sitting in your room. All you have to do is order a VR (virtual reality) headset. Moreover, personal computers give an unlimited flow of information, which has changed the entire education system.
Every day, technologies become smarter and smaller. A smartphone in your pocket may be as powerful as your laptop. No doubt, the development of computer science builds our future. It is hard to count how many research areas there are in technology and computer science, but it is not hard to name the most important of them.
Artificial intelligence tops the charts, of course. However, engineering and biotechnology are not far behind. Communications and media are developing super fast as well. Research is also done in areas that make our lives better and more comfortable, including transport, food and energy, and the medical and pharmaceutical fields.
So check out our list of 204 most relevant computer science research topics below. Maybe one of them will inspire you to do revolutionary research!
💾 Top 10 Computer Science Research Topics
💡 Technologies & Computer Science: Research Ideas
Many people probably picture robots from the movie “I, Robot” when they hear about artificial intelligence. However, this is far from the truth.
AI is meant to be as close to a rational way of thinking as possible. It uses binary logic (just like computers) to help solve problems in many areas. Applied AI is only aimed at one task. A generalized AI branch is looking into a human-like machine that can learn to do anything.

Applied AI already helps researchers in quantum physics and medicine. You deal with AI every day when online shops suggest some items based on your previous purchases. Siri and self-driving cars are also examples of applied AI.
Generalized AI is supposed to be a copy of multitasking human intelligence. However, it is still in the stage of development. Computer technology has yet to reach the level necessary for its creation.
One of the latest trends in this area is improving healthcare management. This is done through the digitalization of all the information in hospitals and even through help with diagnosing patients.
Also, privacy issues and facial recognition technologies are being researched. For example, some governments collect biometric data to reduce and even predict crime.
Research Topics on Artificial Intelligence Technology
Since AI development is exceptionally relevant nowadays, it would be smart to invest your time and effort into researching it. Here are some ideas on artificial intelligence research topics that you can look into:
- In what areas of life is machine learning the most influential?
- How to choose the right algorithm for machine learning?
- Supervised vs. unsupervised machine learning: compare & contrast
- Reinforcement machine learning algorithms
- Deep learning as a subset of machine learning
- Deep learning & artificial neural networks
- How do artificial neural networks work?
- A comparison of model-free & model-based reinforcement learning algorithms
- Reinforcement learning: single vs. multi-agent
- How do social robots interact with humans?
- Robotics in NASA
- Natural language processing: chatbots
- How does natural language processing produce natural language?
- Natural language processing vs. machine learning
- Artificial intelligence in computer vision
- Computer vision application: autonomous cars
- Recommender systems’ approaches
- Recommender systems: content-based recommendation vs. collaborative filtering
- Internet of things & artificial intelligence: the interconnection
- How much data do the Internet of things devices generate?
Biotechnology uses living organisms to modify different products. Even something as simple as baking bread is a process of biotechnology. Nowadays, however, this field has gone as far as changing organisms’ DNA. Genetics and biochemistry are also part of the biotechnology area.
The development of this area allows people to cure diseases with the help of new medicines. In agriculture, more and more research is done on biological treatment and modifying plants. Biotechnology is even involved in the production of our groceries, household chemicals, and textiles.

There are many exciting trends in biotechnology now that carry the potential of changing our world! For example, scientists are working on creating personalized drugs. This is feasible once they apply computer science to analyze people’s DNA.
Also, thanks to using new technologies, doctors can collect exact data and provide the patients with correct diagnosis and treatment. Now, you don’t even need to leave your place to get a doctor’s check-up. Just use telehealth!
Data management is developing in the biotechnology area as well. Thanks to that, doctors and scientists can store and access a tremendous amount of information.
The most exciting is the fact that new technology enables specialists to assess genetic information to treat and prevent illnesses! It may solve the problem of some diseases that were considered untreatable before.
Research Topics on Biotechnology
You can use the following examples of research questions on biotechnology for a presentation or even a PhD paper! Here is a wide range of topics on biotechnology and its relation to agriculture, nanotechnology, and many more:
- Self-sufficient protein supply and biotechnology in farming
- Evaporation vs. evapotranspiration
- DNA cloning and a southern blot
- Pharmacogenetics & personalized drugs
- Is cloning “playing God”?
- Pharmacogenetics: cancer medicines
- How much can we control our genetics, and at what point do we cease to be human?
- Bioethics and stem cell research
- Genetic engineering: gene therapy
- The potential benefits of genetic engineering
- Genetic engineering: dangers and opportunities
- Mycobacterium tuberculosis: counting the proteins
- Plant genetic enhancement: developing resistance to scarcity
- Y-chromosome genotyping: the case of South Africa
- Agricultural biotechnology: GMO crops
- How are new vaccines developed?
- Nanotechnology in treating HIV
- Allergenic potential & biotechnology
- Whole-genome sequencing in biotechnology
- Genes in heavy metal tolerance: an overview
- Food biotechnology & food-borne illnesses
- How to eliminate heat-resistant microorganisms with ultraviolet?
- High-throughput screening & biotechnology
- How do new food processing technologies affect bacteria related to Aspalathus Linearis?
- Is sweet sorghum suitable for the production of bioethanol in Africa?
- How can pesticides help to diagnose cancer?
- How is embelin used to prevent cancer?
One of the first areas that technology affected was communications and media. People from the last century couldn’t have imagined how easy it would be to get connected with anyone! Internet connections are appearing even in the most remote places.
Nowadays, media is used not only for social interaction but for business development and educational purposes as well. You can now start an entirely online business or use special tools to promote the existing one. Also, many leading universities offer online degrees.
In communications and media, AI has been playing the role of enhancement recently. The technology helps create personalized content for always demanding consumers.
Developing media also create numerous job opportunities. For instance, recently, an influencer has become a trending career. Influencers always use the most relevant communication tools available. At the moment, live videos and podcasting are on the top.
Now, you just need to reach your smartphone to access all the opportunities mentioned above! You can apply for a college, find a job, or reach out to all your followers online. It is hard to imagine how far communication and media can go…
Communications and Media Technology Research Topics
There are quite a few simple yet exciting ideas for media and communications technology research topics. Hopefully, you will find THE ONE amongst these Information and Communications Technology (ICT) research proposal topics:
- New media: the importance of ethics in the process of communication
- The development of communication via computer over the last decade
- How have social media changed communication?
- Media during the disasters: increasing panic or helping reduce it?
- Authorities’ media representations in different countries: compare & contrast
- Are people starting to prefer newspapers over new media again?
- How has the Internet changed media?
- Communication networks
- The impact of social media on super bowl ads
- Communications: technology and personal contact
- New content marketing ideas
- Media exposure and its influence on adolescents
- The impact of mass media on personal socialization
- Internet and interactive media as an advertising tool
- Music marketing in a digital world
- How do people use hype in the media?
- Psychology of videoblog communication
- Media & the freedom of speech
- Is it possible to build trustful relationships in virtual communication?
- How to maintain privacy in social media?
- Communication technologies & cyberbullying
- How has the interpersonal communication changed with the invention of computers?
- The future of the communication technologies
- Yellow journalism in new media
- How do enterprises use ICT to gain a competitive advantage?
- Healthcare and ICT
- Can we live without mass media?
- Mass media and morality in the 21st century
💻 Computer Science & Engineering
If you have ever wondered how computers work, you better ask a professional in computer science and engineering. This major combines two different, yet interconnected, worlds of machines.
Computer science takes care of the computer’s brain. It usually includes areas of study, such as programming languages and algorithms. Scientists also recognize three paradigms in terms of the computer science field.
For the rationalist paradigm, computer science is a part of math. The technocratic paradigm is focused on software engineering, while the scientific one is all about natural sciences. Interestingly enough, the latter can also be found in the area of artificial intelligence!

On the other hand, computer engineering maintains a computer’s body – hardware and software. It relies quite heavily on electrical engineering. And only the combination of computer science and engineering gives a full understanding of the machine.
As for trends and innovations, artificial intelligence development is probably the main one in the area of computer science technology. Big data is another field that has been extremely popular in recent years.
Cybersecurity is and will be one of the leading research fields in our Information Age. The latest trend in computer science and engineering is also virtual reality.
Computer Science Research Topics
If you want to find a good idea for your thesis or you are just preparing for a speech, check out this list of research topics in computer science and engineering:
- How are virtual reality & human perception connected?
- The future of computer-assisted education
- Computer science & high-dimensional data modeling
- Computer science: imperative vs. declarative languages
- The use of blockchain and AI for algorithmic regulations
- Banking industry & blockchain technology
- How does the machine architecture affect the efficiency of code?
- Languages for parallel computing
- How is mesh generation used for computational domains?
- Ways of persistent data structure optimization
- Sensor networks vs. cyber-physical system
- The development of computer graphics: non-photorealistic rendering case
- The development of the systems programming languages
- Game theory & network economics
- How can computational thinking affect science?
- Theoretical computer science in functional analysis
- The most efficient cryptographic protocols
- Software security types: an overview
- Is it possible to eliminate phishing?
- Floating point & programming language
Without energy, no technological progress is possible. Scientists are continually working on improving energy and power technologies. Recently, efforts have been aimed at three main areas.
Developing new batteries and fuel types helps create less expensive ways of storing energy. For example, fuel cells can be used for passenger buses. They need to be connected to a source of fuel to work, but this guarantees constant production of electricity as long as fuel is supplied.
One of the potential trends of the next years is hydrogen energy storage. This method is still in the stage of development. It would allow the use of hydrogen instead of electricity.

A smart grid is another area that uses information technology for the most efficient use of energy. For instance, a first-generation smart grid tracks the movement of electric energy as it happens and sends the information back, which is a great way to correct energy consumption in real time. More development is also being done on electricity generation, aimed at technologies that can produce power from sources that haven’t been used before. The trends in this area include second-generation biofuels and photovoltaic glass.
Energy Technologies Research Topics
Since humanity cannot be using fossil fuels forever, the research in the area of energy can be extremely fruitful. The following list of energy and power technology research paper topics can give you an idea of where to dig:
- How can fuel cells be used for stationary power generation?
- Lithium-ion vs. lithium-air batteries: energy density
- Are lithium-air batteries better than gasoline?
- Renewable energy usage: advantages and disadvantages
- The nuclear power usage in the UAE
- India’s solar installations
- Gas price increasing and alternative energy sources
- How can methods of energy transformation be applied with hydrogen energy?
- Is hydrogen energy our future?
- Thermal storage & AC systems
- How to load balance using smart grid?
- Distributed energy generation to optimize power waste
- Is the smart energy network a solution to climate change?
- The future of the tidal power
- The possibility of 3D printing of micro stirling engines
- How can robots be used to adjust solar panels to weather?
- Advanced biofuels & algae
- Can photovoltaic glass be fully transparent?
- Third-generation biofuels: algae vs. crop-based
- Space-based solar power: myth or reality of the future?
- Can smaller nuclear reactors be more efficient?
- Inertial confinement fusion & clean energy
- Renewable energy technologies: an overview
- How can thorium change the nuclear power field?
The way we get our food has changed drastically with technological development. Manufacturers are looking for ways to feed 7.5 billion people more efficiently, and the demand is growing every year. Now technology is used not only for packaging but for producing and processing food as well.
Introducing robots into the process of manufacturing brings multiple benefits to the producer. Not only do they make it more cost-efficient, but they also reduce safety problems.
Surprisingly enough, you can print food on the 3D printer now! This technology is applied to produce soft food for people who can’t chew. NASA decided to use it for fun as well and printed a pizza!
Drones now help farmers to keep an eye on crops from above. It helps them see the full picture and analyze the current state of the fields. For example, a drone can spot a starting disease and save the crop.
The newest eco trends push companies to become more environmentally aware. They use technologies to create safer packaging. The issue of food waste is also getting more and more relevant. Consumers want to know that nothing is wasted. Thanks to the new technologies, the excess food is now used more wisely.
Food Technology Research Topics
If you are looking for qualitative research topics about technology in the food industry, here is a list of ideas you don’t want to miss:
- What machines are used in the food industry?
- How do robots improve safety in butchery?
- Food industry & 3D printing
- 3D printed food – a solution to help people with swallowing disorder?
- Drones & precision agriculture
- How is robotics used to create eco-friendly food packaging?
- Is micro packaging our future?
- The development of edible cling film

- Technology & food waste: what are the solutions?
- Additives and preservatives & human gut microbiome
- The effect of citric acid on the orange juice: physicochemical level
- Vegetable oils in mass production: compare & contrast
- Time-temperature indicators & food industry
- Conventional vs. hydroponic farming
- Food safety: a policy issue in agriculture today
- How to improve the detection of parasites in food?
- What are the newest technologies in the baking industry?
- Eliminating byproducts in edible oils production
- Cold plasma & biofilms
- How good are the antioxidant peptides derived from plants?
- Electronic nose in food industry and agriculture
- The harm of polyphenols in food
Why does people’s life expectancy get higher and higher every year? One of the main reasons is innovation in the medical field. For example, the development of new equipment helps medical professionals save many lives.
Thanks to information technology, work in the medical area is much more structured now. Hospitals use tablets and electronic medical records, which helps them access and share data more efficiently.
As for medical devices, emerging technologies save more lives than ever! For instance, operations performed by robots are getting more and more popular. Don’t worry! Doctors are still in charge; they just control the robots from another room. This allows operations to be less invasive and more precise.
Moreover, science not only helps treat diseases but also prevent them! The medical research aims for the development of vaccines against deadly illnesses like malaria.
Some of the projects even sound more like crazy ideas from the future. But it is all happening right now! Scientists are working on the creation of artificial organs and the best robotic prosthetics.
All the technologies mentioned above are critical for successful healthcare management.
Medical Technology Research Topics
If you feel like saving lives is the purpose of your life, then technological research topics in the medical area are for you! These topics would also suit your research paper:
- How effective are robotic surgeries?
- Smart inhalers as the new solution for asthma treatment
- Genetic counseling – a new way of preventing diseases?
- The benefits of the electronic medical records
- Erythrocytapheresis to treat sickle cell disease
- Defibrillator & cardiac resynchronization therapy
- Why do drug-eluting stents fail?
- Dissolvable brain sensors: an overview
- 3D printing for medical purposes
- How soon will we be able to create artificial organs?
- Wearable technologies & healthcare
- Precision medicine based on genetics
- Virtual reality devices for educational purposes in medical schools
- The development of telemedicine
- Clustered regularly interspaced short palindromic repeats as the way of treating diseases
- Nanotechnology & cancer treatment
- How safe is genome editing?
- The trends in electronic diagnostic tools development
- The future of the brain-machine interface
- How does wireless communication help medical professionals in hospitals?
In recent years, technologies have been drastically changing the pharmaceutical industry. A lot of processes are now optimized with the help of information technology. The ways of prescribing and distributing medications are much more efficient today. Moreover, the production of medicines itself has changed.
For instance, electronic prior authorization is now applied by more than half of the pharmacies. It makes the process of acquiring prior authorization much faster and easier.
The high price of medicines is the number one reason why patients stop using prescriptions. Real-time pharmacy benefit checks may be the solution! This system gives prescribers another perspective: while working with individual patients, they can consider multiple factors with the help of the data provided.
The pharmaceutical industry also adopts some new technologies to compete on the international level. They apply advanced data analytics to optimize their work.
Companies try to reduce the cost and boost the effectiveness of the medicines. That is why they look into technologies that help avoid failures in the final clinical trials.
The constant research in the area of pharma is paying off. New specialty drugs and therapies arrive to treat chronic diseases. However, there are still enough opportunities for development.
Pharmaceutical Technologies Research Topics
Following the latest trends in the pharmaceutical area, this list offers a wide range of creative research topics on pharmaceutical technologies:
- Electronic prior authorization as a pharmacy technological trend
- The effectiveness of medication therapy management
- Medication therapy management & health information exchanges
- Electronic prescribing of controlled substances as a solution for drug abuse issue
- Do prescription drug monitoring programs really work?
- How can pharmacists help with meaningful use?
- NCPDP script standard for specialty pharmacies
- Pharmaceutical technologies & specialty medications
- What is the patient’s interest in the real-time pharmacy?
- The development of the vaccines for AIDS
- Phenotypic screening in pharmaceutical researches
- How does cloud ERP help pharmaceutical companies with analytics?
- Data security & pharmaceutical technologies
- An overview of the DNA-encoded library technology
- Pharmaceutical technologies: antibiotics vs. superbugs
- Personalized medicine: body-on-a-chip approach
- The future of cannabidiol medication in pain management
- How is cloud technology beneficial for small pharmaceutical companies?
- A new perspective on treatment: medicines from plants
- Anticancer nanomedicine: a pharmaceutical hope
🚈 Transportation Technologies
We used to be focused on making transportation more convenient. However, nowadays, the focus is slowly switching to ecological issues.
It doesn’t mean that vehicles can’t be comfortable at the same time. That is why the development of electric and self-driving cars is at its peak.
Transportation technologies also address the issues of safety and traffic jams. Quite a few solutions have been suggested. However, it would be hard for big cities to switch to other systems quickly.
One of the solutions is using shared vehicle phone applications. It allows reducing the number of private cars on the roads. On the other hand, if more people start preferring private vehicles, it may cause even more traffic issues.

The most innovative cities even start looking for more eco-friendly solutions for public transport. Buses are being replaced by electric ones. At the same time, the latest trend is using private electric vehicles such as scooters and bikes.
For people to use public transport more, it should be more accessible and comfortable. That is why payment systems are also being updated. Now, all you need to do is download an app and buy a ticket in one click!
Transportation Technologies Research Topics
Here you can find the best information technology research topics related to transportation technologies:
- How safe are self-driving cars?
- Electric vs. hybrid cars: compare & contrast
- How to save your smart car from being hijacked?
- How do next-generation GPS devices adjust the route for traffic?
- Transportation technologies: personal transportation pods
- High-speed rail networks in Japan
- Cell phones during driving: threats and solutions
- Transportation: electric cars effects
- Teleportation: physics of the impossible
- How soon will we see Elon Musk’s Hyperloop?
- Gyroscopes as a solution for convenient public transportation
- Electric trucks: the effect on logistics
- Why were electric scooters banned in some cities in 2018?
- Carbon fiber as an optional material for unit load devices
- What are the benefits of advanced transportation management systems?
- How to make solar roadways more cost-effective?
- How is blockchain applied in the transportation industry?
- Transportation technologies: an overview of the freight check-in
- How do delivery companies use artificial intelligence?
- Water-fueled cars: the technology of future or fantasy?
- How can monitoring systems be used to manage curb space?
- Inclusivity and accessibility in public transport: an overview
- The development of mobility-as-a-service
All in all, this article compiles 204 of the most interesting research topics on technology and computer science. It is a great source of inspiration for anyone interested in doing research in this field.
We have divided the topics into specific areas, which makes it easier for you to find your favorite one. There are around 20 topics in each category, along with a short explanation of the most recent trends in that area.
You can pick a topic from the artificial intelligence section and start working on it right away! There is also a wide selection of questions on biotechnology and engineering waiting to be answered.
Since media and communications are part of our everyday life and develop very quickly, this area is also worth exploring. And if you want to make a real change, don’t overlook the medical and pharmaceutical, food and energy, and transportation areas.
Of course, you are welcome to customize the topic you choose! The more creativity, the better! Maybe your research has the power to change something! Good luck, and have fun!
Thank you very much for the best research topics across all science and art projects. What I am most interested in is computer forensics and security, specifically for IT students.

Thanks for stopping by!
Hello, glad to hear from you!
Computer science focuses on creating programs and applications, while information technology focuses on using computer systems and networks. As for computer science jobs, they include software developers, web developers, software engineers, and data scientists.
Top Ten Computer Science Education Research Papers of the Last 50 Years Recognized
At the 50th anniversary SIGCSE Symposium, the leading computer science education group highlights research that has shaped the field.
New York, NY, March 2, 2019 – As a capstone to its 50th annual SIGCSE Technical Symposium , leaders of the Association for Computing Machinery (ACM) Special Interest Group on Computer Science Education (SIGCSE) are celebrating the ideas that have shaped the field by recognizing a select group of publications with a “Top Ten Symposium Papers of All Time Award.” The top ten papers were chosen from among the best papers that were presented at the SIGCSE Technical Symposium over the last 49 years.
As part of the Top Ten announcement today in Minneapolis, the coauthors of each top paper will receive a plaque, free conference registration for one co-author to accept the award and up to a total of $2,000 that can be used toward travel for all authors of the top ranked paper.
“In 1969, the year of our first SIGCSE symposium, computing education was a niche specialty,” explains SIGCSE Board Chair Amber Settle of DePaul University in Chicago, USA. “Today, it is an essential skill students need to prepare for the workforce. Computing has become one of the most popular majors in higher education, and more and more students are being introduced to computing in K-12 settings. The Top Ten Symposium Papers of All Time Award will emphasize the outstanding research that underpins and informs how students of all ages learn computing. We also believe that highlighting excellent research will inspire others to enter the computing education field and make their own contributions.”
The Top Ten Symposium Papers are:
1. “ Identifying student misconceptions of programming ” (2010) Lisa C. Kaczmarczyk, Elizabeth R. Petrick, University of California, San Diego; Philip East, University of Northern Iowa; Geoffrey L. Herman, University of Illinois at Urbana-Champaign Computing educators are often baffled by the misconceptions that their CS1 students hold. We need to understand these misconceptions more clearly in order to help students form correct conceptions. This paper describes one stage in the development of a concept inventory for Computing Fundamentals: investigation of student misconceptions in a series of core CS1 topics previously identified as both important and difficult. Formal interviews with students revealed four distinct themes, each containing many interesting misconceptions.
2. “ Improving the CS1 experience with pair programming ” (2003) Nachiappan Nagappan, Laurie Williams, Miriam Ferzli, Eric Wiebe, Kai Yang, Carol Miller, Suzanne Balik, North Carolina State University Pair programming is a practice in which two programmers work collaboratively at one computer, on the same design, algorithm, or code. Prior research indicates that pair programmers produce higher quality code in essentially half the time taken by solo programmers. The authors organized an experiment to assess the efficacy of pair programming in an introductory Computer Science course. Their results indicate that pair programming creates a laboratory environment conducive to more advanced, active learning than traditional labs; students and lab instructors report labs to be more productive and less frustrating.
3. “ Undergraduate women in computer science: experience, motivation and culture ” (1997) Allan Fisher, Jane Margolis, Faye Miller, Carnegie Mellon University During a year-long study, the authors examined the experiences of undergraduate women studying computer science at Carnegie Mellon University, with a specific eye toward understanding the influences and processes whereby they attach themselves to or detach themselves from the field. This report, midway through the two-year project, recaps the goals and methods of the study, reports on their progress and preliminary conclusions, and sketches their plans for the final year and the future beyond this particular project.
4. “ A Multi-institutional Study of Peer Instruction in Introductory Computing ” (2016) Leo Porter, Beth Simon, University of California, San Diego; Dennis Bouvier, Southern Illinois University; Quintin Cutts, University of Glasgow; Scott Grissom, Grand Valley State University; Cynthia Lee, Stanford University; Robert McCartney, University of Connecticut; Daniel Zingaro, University of Toronto Peer Instruction (PI) is a student-centric pedagogy in which students move from the role of passive listeners to active participants in the classroom. This paper adds to this body of knowledge by examining outcomes from seven introductory programming instructors: three novices to PI and four with a range of PI experience. Through common measurements of student perceptions, the authors provide evidence that introductory computing instructors can successfully implement PI in their classrooms.
5. " The introductory programming course in computer science: ten principles " (1978) G. Michael Schneider, University of Minnesota Schneider describes the crucial goals of any introductory programming course while leaving to the reader the design of a specific course to meet these goals. This paper presents ten essential objectives of an initial programming course in Computer Science, regardless of who is teaching or where it is being taught. Schneider attempts to provide an in-depth, philosophical framework for the course called CS1—Computer Programming 1—as described by the ACM Curriculum Committee on Computer Science.
6. “ Constructivism in computer science education ” (1998) Mordechai Ben-Ari, Weizmann Institute of Science Constructivism is a theory of learning which claims that students construct knowledge rather than merely receive and store knowledge transmitted by the teacher. Constructivism has been extremely influential in science and mathematics education, but not in computer science education (CSE). This paper surveys constructivism in the context of CSE, and shows how the theory can supply a theoretical basis for debating issues and evaluating proposals.
7. “ Using software testing to move students from trial-and-error to reflection-in-action ” (2004) Stephen H. Edwards, Virginia Tech Introductory computer science students have relied on a trial-and-error approach to fixing errors and debugging for too long. Moving to a reflection-in-action strategy can help students become more successful. Traditional programming assignments are usually assessed in a way that ignores the skills needed for reflection-in-action, but software testing promotes the hypothesis-forming and experimental validation that are central to this mode of learning. By changing the way assignments are assessed--where students are responsible for demonstrating correctness through testing, and then assessed on how well they achieve this goal--it is possible to reinforce desired skills. Automated feedback can also play a valuable role in encouraging students while also showing them where they can improve. (A toy example of such a student-written test appears after this list.)
8. “ What should we teach in an introductory programming course ” (1974) David Gries, Cornell University Gries argues that an introductory course (and its successor) in programming should be concerned with three aspects of programming: 1. How to solve problems, 2. How to describe an algorithmic solution to a problem, and 3. How to verify that an algorithm is correct. In this paper he discusses mainly the first two aspects. He notes that the third is just as important, but if the first two are carried out in a systematic fashion, the third is much easier than commonly supposed.
9. “ Contributing to success in an introductory computer science course: a study of twelve factors ” (2001) Brenda Cantwell Wilson, Murray State University; Sharon Shrock, Southern Illinois University This study was conducted to determine factors that promote success in an introductory college computer science course. The model included twelve possible predictive factors including math background, attribution for success/failure (luck, effort, difficulty of task, and ability), domain specific self-efficacy, encouragement, comfort level in the course, work style preference, previous programming experience, previous non-programming computer experience, and gender. Subjects included 105 students enrolled in a CS1 introductory computer science course at a midwestern university. The study revealed three predictive factors in the following order of importance: comfort level, math, and attribution to luck for success/failure.
10. “ Teaching objects-first in introductory computer science ” (2003) Stephen Cooper, Saint Joseph's University; Wanda Dann, Ithaca College; Randy Pausch, Carnegie Mellon University An objects-first strategy for teaching introductory computer science courses is receiving increased attention from CS educators. In this paper, the authors discuss the challenge of the objects-first strategy and present a new approach that attempts to meet this challenge. The approach is centered on the visualization of objects and their behaviors using a 3D animation environment. Statistical data as well as informal observations are summarized to show evidence of student performance as a result of this approach. A comparison is made of the pedagogical aspects of this new approach with that of other relevant work.
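As a rough illustration of the assessment style Edwards describes in paper 7, here is a minimal sketch of a student-written test using Python's unittest framework. The function under test and the test data are hypothetical examples of my own, not material from the paper.

```python
# Hypothetical student exercise: demonstrate correctness through testing.
# The median() function and its tests are illustrative only.
import unittest

def median(values):
    """Return the median of a non-empty list of numbers."""
    ordered = sorted(values)
    n = len(ordered)
    mid = n // 2
    if n % 2 == 1:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

class TestMedian(unittest.TestCase):
    def test_odd_length(self):
        self.assertEqual(median([3, 1, 2]), 2)

    def test_even_length(self):
        self.assertEqual(median([4, 1, 3, 2]), 2.5)

if __name__ == "__main__":
    unittest.main()
```

Assessing students on how well such tests demonstrate the correctness of their own code is the shift from trial-and-error toward reflection-in-action that the paper argues for.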
Annual Best Paper Award Announced Today
SIGCSE officers also announced the inauguration of an annual SIGCSE Test of Time Award. The first award will be presented at the 2020 SIGCSE Symposium and will recognize research publications that have had wide-ranging impact on the field.
About SIGCSE
The Special Interest Group on Computer Science Education of the Association for Computing Machinery (ACM SIGCSE) is a community of approximately 2,600 people who, in addition to their specialization within computing, have a strong interest in the quality of computing education. SIGCSE provides a forum for educators to discuss the problems concerned with the development, implementation, and/or evaluation of computing programs, curricula, and courses, as well as syllabi, laboratories, and other elements of teaching and pedagogy.
ACM, the Association for Computing Machinery , is the world's largest educational and scientific computing society, uniting educators, researchers, and professionals to inspire dialogue, share resources, and address the field's challenges. ACM strengthens the computing profession's collective voice through strong leadership, promotion of the highest standards, and recognition of technical excellence. ACM supports the professional growth of its members by providing opportunities for life-long learning, career development, and professional networking.
Contact: Adrienne Decker 585-475-4653 [email protected]

Computer science
Computer science is the study and development of the protocols required for automated processing and manipulation of data. This includes, for example, creating algorithms for efficiently searching large volumes of information or encrypting data so that it can be stored and transmitted securely.
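As a toy illustration of the “efficiently searching large volumes of information” part of that description, here is a minimal binary-search sketch in Python; the function name and sample data are purely illustrative and not taken from the page above.

```python
# Toy illustration of "algorithms for efficiently searching large
# volumes of information": binary search over a sorted list.
def binary_search(sorted_items, target):
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid                 # found: return its index
        if sorted_items[mid] < target:
            lo = mid + 1               # discard the lower half
        else:
            hi = mid - 1               # discard the upper half
    return -1                          # not present

print(binary_search([2, 3, 5, 7, 11, 13], 11))  # -> 4
```

Because each comparison discards half of the remaining items, the search takes logarithmic rather than linear time in the size of the list.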
News & Views | 22 February 2023
Towards quantum computers that are robust to errors
Latest research and reviews.
Research 04 March 2023 | Open Access
Hard optimization problems have soft edges
- Raffaele Marino
- & Scott Kirkpatrick
Molecular-evaluated and explainable drug repurposing for COVID-19 using ensemble knowledge graph embedding
- Md Kamrul Islam
- Diego Amaya-Ramirez
- & Malika Smaïl-Tabbone
Research 03 March 2023 | Open Access
Feature augmentation based on information fusion rectification for few-shot image classification
- Shengzhao Tian
- & Duanbing Chen
Research 02 March 2023 | Open Access
Multi-order graph attention network for water solubility prediction and interpretation
- Hyunwoo Park
- & Youngdoo Son
Parameter-efficient fine-tuning of large-scale pre-trained language models
Training a deep neural network can be costly, but training time is reduced when a pre-trained network can be adapted to different use cases. Ideally, only a small number of parameters need to be changed in this process of fine-tuning, and those parameters can then be distributed more easily. In this Analysis, different methods of fine-tuning with only a small number of parameters are compared on a large set of natural language processing tasks (a minimal freezing sketch follows this entry).
- Yujia Qin
- & Maosong Sun
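The "freeze most parameters, train only a few" idea behind parameter-efficient fine-tuning can be sketched in a few lines of PyTorch. The snippet below is only a minimal illustration of that general approach, not any of the methods compared in the Analysis above: the small torchvision backbone, the 10-class head, and the dummy data are all illustrative assumptions of my own.

```python
# Minimal sketch of parameter-efficient fine-tuning: freeze a backbone
# and train only a small, newly added head.
import torch
import torch.nn as nn
from torchvision.models import resnet18

backbone = resnet18(weights=None)      # in practice, load pretrained weights
for p in backbone.parameters():
    p.requires_grad = False            # freeze the (pre-trained) backbone

backbone.fc = nn.Linear(backbone.fc.in_features, 10)   # small trainable head

trainable = [p for p in backbone.parameters() if p.requires_grad]
print(sum(p.numel() for p in trainable), "trainable parameters")

optimizer = torch.optim.AdamW(trainable, lr=1e-3)
x = torch.randn(8, 3, 224, 224)        # dummy batch of images
y = torch.randint(0, 10, (8,))         # dummy labels
loss = nn.CrossEntropyLoss()(backbone(x), y)
loss.backward()
optimizer.step()
```

Only the newly added head receives gradient updates, so the set of weights that actually changes, and that would need to be shared per use case, stays small.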
Evidence of a predictive coding hierarchy in the human brain listening to speech
Current machine learning language algorithms make adjacent word-level predictions. In this work, Caucheteux et al. show that the human brain probably uses long-range and hierarchical predictions, taking into account up to eight possible words into the future.
- Charlotte Caucheteux
- Alexandre Gramfort
- & Jean-Rémi King
News and Comment
News | 28 February 2023
Quick uptake of ChatGPT, and more — this week’s best science graphics
Three charts from the world of research, selected by Nature editors.
Decoding the business of brain–computer interfaces
Fifty years after the term brain–computer interface was coined, the neurotechnology is being pursued by an array of start-up companies using a variety of different technologies. But the path to clinical and commercial success remains uncertain.
Comments & Opinion | 24 February 2023
Why artificial intelligence needs to understand consequences
A machine with a grasp of cause and effect could learn more like a human, through imagination and regret.
- Neil Savage
Comments & Opinion | 22 February 2023
AI writing tools could hand scientists the ‘gift of time’
ChatGPT might not yet give us sparkling prose. But it can free scientists up to focus on more-stimulating writing tasks, says John Tregoning.
- John Tregoning
For quantum computers to fulfil their potential, they need to detect and correct errors in encoded information to reach sufficiently low error rates for reliable operation. For the first time, a device has been created in which encoded error rates improve as the system size is increased.
News | 22 February 2023
Google’s quantum computer hits key milestone by reducing errors
Researchers demonstrate for the first time that using more qubits can lower error rate of quantum calculations.
- Davide Castelvecchi

Computer science and technology

QuARC 2023 explores the leading edge in quantum information and science
The second annual student-industry conference was held in-person for the first time.
March 3, 2023
Read full story →

Large language models are biased. Can logic help save them?
MIT researchers trained logic-aware language models to reduce harmful stereotypes like gender and racial biases.

Robot armies duke it out in Battlecode’s epic on-screen battles
The long-running programming competition encourages skills and friendships that last a lifetime.

Integrating humans with AI in structural design
A process that seeks feedback from human specialists proves more effective at optimization than automated systems working alone.
March 2, 2023

Hari Balakrishnan awarded Marconi Prize
The prize is the top honor within the field of communications technology.
February 28, 2023

Report: CHIPS Act just the first step in addressing threats to US leadership in advanced computing
The Advanced Computing Users Survey, sampling sentiments from 120 top-tier universities, national labs, federal agencies, and private firms, finds the decline in America’s advanced computing lead spans many areas.

Comedy meets mathematics in a new opera at MIT
Senior music lecturer Elena Ruehr turns Charles Babbage and Ada Lovelace, groundbreaking thinkers of modern computing, into crime fighters.
February 23, 2023

Student-led conference charts the future of micro- and nanoscale research, reinforces scientific community
19th Microsystems Annual Research Conference reveals the next era of microsystems technologies, along with skiing and a dance party.
February 22, 2023

A new way for quantum computing systems to keep their cool
A wireless technique enables a super-cold quantum computer to send and receive data without generating too much error-causing heat.
February 21, 2023

Engineers discover a new way to control atomic nuclei as “qubits”
Using lasers, researchers can directly control a property of nuclei called spin, which can encode quantum information.
February 15, 2023

“MIT Illuminations,” a colorful installation and introduction to creative computation, is now open in Kendall Square
Located in the new MIT Welcome Center in Building E38, the installation expresses the dynamic, vibrant culture of MIT through the medium of programmable light.
February 14, 2023

Scientists boost quantum signals while reducing noise
“Squeezing” noise over a broad frequency bandwidth in a quantum system could lead to faster and more accurate quantum measurements.
February 9, 2023

Solving a machine-learning mystery
A new study shows how large language models like GPT-3 can learn a new task from just a few examples, without the need for any new training data.
February 7, 2023

Automating the math for decision-making under uncertainty
A new tool brings the benefits of AI programming to a much broader class of problems.
February 6, 2023

Toward new, computationally designed cybersteels
With a grant from the Office of Naval Research, MIT researchers aim to design novel high-performance steels, with potential applications including printed aircraft components and ship hulls.
February 3, 2023
Computer Science (since January 1993)
Categories within Computer Science
- cs.AI - Artificial Intelligence ( new , recent , current month ) Covers all areas of AI except Vision, Robotics, Machine Learning, Multiagent Systems, and Computation and Language (Natural Language Processing), which have separate subject areas. In particular, includes Expert Systems, Theorem Proving (although this may overlap with Logic in Computer Science), Knowledge Representation, Planning, and Uncertainty in AI. Roughly includes material in ACM Subject Classes I.2.0, I.2.1, I.2.3, I.2.4, I.2.8, and I.2.11.
- cs.CL - Computation and Language ( new , recent , current month ) Covers natural language processing. Roughly includes material in ACM Subject Class I.2.7. Note that work on artificial languages (programming languages, logics, formal systems) that does not explicitly address natural-language issues broadly construed (natural-language processing, computational linguistics, speech, text retrieval, etc.) is not appropriate for this area.
- cs.CC - Computational Complexity ( new , recent , current month ) Covers models of computation, complexity classes, structural complexity, complexity tradeoffs, upper and lower bounds. Roughly includes material in ACM Subject Classes F.1 (computation by abstract devices), F.2.3 (tradeoffs among complexity measures), and F.4.3 (formal languages), although some material in formal languages may be more appropriate for Logic in Computer Science. Some material in F.2.1 and F.2.2, may also be appropriate here, but is more likely to have Data Structures and Algorithms as the primary subject area.
- cs.CE - Computational Engineering, Finance, and Science ( new , recent , current month ) Covers applications of computer science to the mathematical modeling of complex systems in the fields of science, engineering, and finance. Papers here are interdisciplinary and applications-oriented, focusing on techniques and tools that enable challenging computational simulations to be performed, for which the use of supercomputers or distributed computing platforms is often required. Includes material in ACM Subject Classes J.2, J.3, and J.4 (economics).
- cs.CG - Computational Geometry ( new , recent , current month ) Roughly includes material in ACM Subject Classes I.3.5 and F.2.2.
- cs.GT - Computer Science and Game Theory ( new , recent , current month ) Covers all theoretical and applied aspects at the intersection of computer science and game theory, including work in mechanism design, learning in games (which may overlap with Learning), foundations of agent modeling in games (which may overlap with Multiagent systems), coordination, specification and formal methods for non-cooperative computational environments. The area also deals with applications of game theory to areas such as electronic commerce.
- cs.CV - Computer Vision and Pattern Recognition ( new , recent , current month ) Covers image processing, computer vision, pattern recognition, and scene understanding. Roughly includes material in ACM Subject Classes I.2.10, I.4, and I.5.
- cs.CY - Computers and Society ( new , recent , current month ) Covers impact of computers on society, computer ethics, information technology and public policy, legal aspects of computing, computers and education. Roughly includes material in ACM Subject Classes K.0, K.2, K.3, K.4, K.5, and K.7.
- cs.CR - Cryptography and Security ( new , recent , current month ) Covers all areas of cryptography and security including authentication, public key cryptosystems, proof-carrying code, etc. Roughly includes material in ACM Subject Classes D.4.6 and E.3.
- cs.DS - Data Structures and Algorithms ( new , recent , current month ) Covers data structures and analysis of algorithms. Roughly includes material in ACM Subject Classes E.1, E.2, F.2.1, and F.2.2.
- cs.DB - Databases ( new , recent , current month ) Covers database management, datamining, and data processing. Roughly includes material in ACM Subject Classes E.2, E.5, H.0, H.2, and J.1.
- cs.DL - Digital Libraries ( new , recent , current month ) Covers all aspects of the digital library design and document and text creation. Note that there will be some overlap with Information Retrieval (which is a separate subject area). Roughly includes material in ACM Subject Classes H.3.5, H.3.6, H.3.7, I.7.
- cs.DM - Discrete Mathematics ( new , recent , current month ) Covers combinatorics, graph theory, applications of probability. Roughly includes material in ACM Subject Classes G.2 and G.3.
- cs.DC - Distributed, Parallel, and Cluster Computing ( new , recent , current month ) Covers fault-tolerance, distributed algorithms, stability, parallel computation, and cluster computing. Roughly includes material in ACM Subject Classes C.1.2, C.1.4, C.2.4, D.1.3, D.4.5, D.4.7, E.1.
- cs.ET - Emerging Technologies ( new , recent , current month ) Covers approaches to information processing (computing, communication, sensing) and bio-chemical analysis based on alternatives to silicon CMOS-based technologies, such as nanoscale electronic, photonic, spin-based, superconducting, mechanical, bio-chemical and quantum technologies (this list is not exclusive). Topics of interest include (1) building blocks for emerging technologies, their scalability and adoption in larger systems, including integration with traditional technologies, (2) modeling, design and optimization of novel devices and systems, (3) models of computation, algorithm design and programming for emerging technologies.
- cs.FL - Formal Languages and Automata Theory ( new , recent , current month ) Covers automata theory, formal language theory, grammars, and combinatorics on words. This roughly corresponds to ACM Subject Classes F.1.1, and F.4.3. Papers dealing with computational complexity should go to cs.CC; papers dealing with logic should go to cs.LO.
- cs.GL - General Literature ( new , recent , current month ) Covers introductory material, survey material, predictions of future trends, biographies, and miscellaneous computer-science related material. Roughly includes all of ACM Subject Class A, except it does not include conference proceedings (which will be listed in the appropriate subject area).
- cs.GR - Graphics ( new , recent , current month ) Covers all aspects of computer graphics. Roughly includes material in all of ACM Subject Class I.3, except that I.3.5 is likely to have Computational Geometry as the primary subject area.
- cs.AR - Hardware Architecture ( new , recent , current month ) Covers systems organization and hardware architecture. Roughly includes material in ACM Subject Classes C.0, C.1, and C.5.
- cs.HC - Human-Computer Interaction ( new , recent , current month ) Covers human factors, user interfaces, and collaborative computing. Roughly includes material in ACM Subject Classes H.1.2 and all of H.5, except for H.5.1, which is more likely to have Multimedia as the primary subject area.
- cs.IR - Information Retrieval ( new , recent , current month ) Covers indexing, dictionaries, retrieval, content and analysis. Roughly includes material in ACM Subject Classes H.3.0, H.3.1, H.3.2, H.3.3, and H.3.4.
- cs.IT - Information Theory ( new , recent , current month ) Covers theoretical and experimental aspects of information theory and coding. Includes material in ACM Subject Class E.4 and intersects with H.1.1.
- cs.LO - Logic in Computer Science ( new , recent , current month ) Covers all aspects of logic in computer science, including finite model theory, logics of programs, modal logic, and program verification. Programming language semantics should have Programming Languages as the primary subject area. Roughly includes material in ACM Subject Classes D.2.4, F.3.1, F.4.0, F.4.1, and F.4.2; some material in F.4.3 (formal languages) may also be appropriate here, although Computational Complexity is typically the more appropriate subject area.
- cs.LG - Machine Learning ( new , recent , current month ) Papers on all aspects of machine learning research (supervised, unsupervised, reinforcement learning, bandit problems, and so on) including also robustness, explanation, fairness, and methodology. cs.LG is also an appropriate primary category for applications of machine learning methods.
- cs.MS - Mathematical Software ( new , recent , current month ) Roughly includes material in ACM Subject Class G.4.
- cs.MA - Multiagent Systems ( new , recent , current month ) Covers multiagent systems, distributed artificial intelligence, intelligent agents, coordinated interactions, and practical applications. Roughly covers ACM Subject Class I.2.11.
- cs.MM - Multimedia ( new , recent , current month ) Roughly includes material in ACM Subject Class H.5.1.
- cs.NI - Networking and Internet Architecture ( new , recent , current month ) Covers all aspects of computer communication networks, including network architecture and design, network protocols, and internetwork standards (like TCP/IP). Also includes topics, such as web caching, that are directly relevant to Internet architecture and performance. Roughly includes all of ACM Subject Class C.2 except C.2.4, which is more likely to have Distributed, Parallel, and Cluster Computing as the primary subject area.
- cs.NE - Neural and Evolutionary Computing ( new , recent , current month ) Covers neural networks, connectionism, genetic algorithms, artificial life, adaptive behavior. Roughly includes some material in ACM Subject Class C.1.3, I.2.6, I.5.
- cs.NA - Numerical Analysis ( new , recent , current month ) cs.NA is an alias for math.NA. Roughly includes material in ACM Subject Class G.1.
- cs.OS - Operating Systems ( new , recent , current month ) Roughly includes material in ACM Subject Classes D.4.1, D.4.2., D.4.3, D.4.4, D.4.5, D.4.7, and D.4.9.
- cs.OH - Other Computer Science ( new , recent , current month ) This is the classification to use for documents that do not fit anywhere else.
- cs.PF - Performance ( new , recent , current month ) Covers performance measurement and evaluation, queueing, and simulation. Roughly includes material in ACM Subject Classes D.4.8 and K.6.2.
- cs.PL - Programming Languages ( new , recent , current month ) Covers programming language semantics, language features, programming approaches (such as object-oriented programming, functional programming, logic programming). Also includes material on compilers oriented towards programming languages; other material on compilers may be more appropriate in Architecture (AR). Roughly includes material in ACM Subject Classes D.1 and D.3.
- cs.RO - Robotics ( new , recent , current month ) Roughly includes material in ACM Subject Class I.2.9.
- cs.SI - Social and Information Networks ( new , recent , current month ) Covers the design, analysis, and modeling of social and information networks, including their applications for on-line information access, communication, and interaction, and their roles as datasets in the exploration of questions in these and other domains, including connections to the social and biological sciences. Analysis and modeling of such networks includes topics in ACM Subject classes F.2, G.2, G.3, H.2, and I.2; applications in computing include topics in H.3, H.4, and H.5; and applications at the interface of computing and other disciplines include topics in J.1--J.7. Papers on computer communication systems and network protocols (e.g. TCP/IP) are generally a closer fit to the Networking and Internet Architecture (cs.NI) category.
- cs.SE - Software Engineering ( new , recent , current month ) Covers design tools, software metrics, testing and debugging, programming environments, etc. Roughly includes material in all of ACM Subject Classes D.2, except that D.2.4 (program verification) should probably have Logics in Computer Science as the primary subject area.
- cs.SD - Sound ( new , recent , current month ) Covers all aspects of computing with sound, and sound as an information channel. Includes models of sound, analysis and synthesis, audio user interfaces, sonification of data, computer music, and sound signal processing. Includes ACM Subject Class H.5.5, and intersects with H.1.2, H.5.1, H.5.2, I.2.7, I.5.4, I.6.3, J.5, K.4.2.
- cs.SC - Symbolic Computation ( new , recent , current month ) Roughly includes material in ACM Subject Class I.1.
- cs.SY - Systems and Control ( new , recent , current month ) cs.SY is an alias for eess.SY. This section includes theoretical and experimental research covering all facets of automatic control systems. The section is focused on methods of control system analysis and design using tools of modeling, simulation and optimization. Specific areas of research include nonlinear, distributed, adaptive, stochastic and robust control in addition to hybrid and discrete event systems. Application areas include automotive and aerospace control systems, network control, biological systems, multiagent and cooperative control, robotics, reinforcement learning, sensor networks, control of cyber-physical and energy-related systems, and control of computing systems.
