The Impact of Tablet PC Presentation

 on Student Learning in Introductory Macroeconomics

 

 

 

Phillip M. Holleran

Department of Economics

Radford University

 

 

 

 

P.O. Box 6952

Radford University

Radford, VA  24142

 

Phone: (540) 831-6778

Fax: (540)

 

pholleran@radford.edu

 

 

 

 

 

Keywords: Tablet PC; student learning

The Impact of Tablet PC Presentation

 on Student Learning in Introductory Macroeconomics

 

 

 

Phillip M. Holleran

Department of Economics

Radford University

 

 

 

 

Abstract. Recent years have witnessed a surge in the use of Tablet PCs and pen-based technology as a classroom alternative to traditional methods of presenting material. Several previous studies report survey results indicating that the Tablet PC improves lecture presentations and enhances student understanding of presented material. This paper uses a natural experiment to test the effectiveness of Tablet PC presentation relative to marker-and-whiteboard presentation. Comparisons of student performance do not support the hypothesis that Tablet PC presentation increases student learning. Regression analysis of a model of student learning, including controls for student characteristics, likewise fails to support the hypothesis.

The Impact of Tablet PC Presentation

 on Student Learning in Introductory Macroeconomics

 

 

 

            Recent years have witnessed a surge in the use of Tablet PCs and pen-based technology as a classroom alternative to traditional methods of presenting material, such as PowerPoint slides or “chalk-and-talk” with a blackboard or whiteboard. Several studies report survey results indicating that, according to students, presentation of in-class material with the Tablet PC increases the clarity and organization of lecture presentations; improves instructor response to student questions and student-instructor interactions; and enhances student attentiveness in class and understanding of presented material (Anderson et al. 2004; Hulls 2005; Mock 2004; Wise et al. 2006). Student surveys thus suggest the hypothesis that use of the Tablet PC increases student learning compared to more-traditional methods of classroom presentation.

            To date, however, surveys of student perceptions have not been accompanied by data on student learning. This paper tests the hypothesis that use of the Tablet PC increases student learning by reporting the results of a natural experiment on the relative effectiveness of methods of presenting class material.  During the Spring 2006 semester, I taught three sections of a Principles of Macroeconomics course. In two of those sections, I used a Tablet PC with digital ink to wirelessly project classroom material onto a large screen. A third section of this course, however, met in a classroom that was not wireless-enabled, and thus I resorted to the traditional technology of using a marker to write notes on a whiteboard.

            Comparison of mean final exam scores and individual student course averages among the sections, as well as pre- and post-test comparisons, yield no support for the hypothesis that student learning is greater with the Tablet PC than with the marker-and-whiteboard presentation technology. Regression analysis of a model of student learning, including controls for student characteristics, likewise fails to support the hypothesis. It may be, then, that the technology as used really has no positive effect. Alternatively, the results of the analysis may indicate that the presentation reduces students’ cost of in-class learning, causing students to reduce expenditure of other time and effort in learning the material, leaving the total amount of learning unchanged.

            It is important to be clear what this experiment does and does not test. First, the data are from one instructor’s use of the Tablet PC and thus reflect any idiosyncrasies of this instructor’s teaching style, both with and without the Tablet. Thus this is a test not of the Tablet per se but of the effectiveness of this instructor’s use of the Tablet relative to the effectiveness of this instructor’s use of marker and whiteboards. Second, this is a test of presentation with the Tablet PC by itself; the experiment did not incorporate other technologies (such as interactive technologies enabling direct exchange of content between students and instructor and between students and other students) that can be used in conjunction with the Tablet PC. Third, this is a test of only the classroom presentation technology of the Tablet; other possible uses of the Tablet, such as posting notes to a course website or electronic submission and return of student papers, are not considered here, though they may well be very real advantages of the Tablet technology.

 

I. The Tablet PC as Instructional Technology

            The Tablet PC is a portable PC with a pen or stylus that can be used to write and draw on the computer’s screen with digital ink. The digital ink can be easily moved or erased. With the appropriate software, the words and images on the Tablet screen can be electronically projected onto a screen at the front of the classroom, as with PowerPoint slides. I used a Gateway M275 notebook Tablet PC with a screen that can swivel 180 degrees, fold down, and lock, so that the screen can be written on as a slate and carried around the classroom. The digital-ink pen is used to draw or write on the screen; when tapped on the screen, the pen also serves as a mouse. To begin each class, I connected wirelessly to the university network and then used a program called AirProjector to connect to the projector mounted in the classroom. Everything on the instructor’s computer screen was then displayed for students to view.

            Schwager et al. (2005) and Mock (2004) describe numerous advantages of using the Tablet PC in the classroom. One of the most important advantages is that material prepared in advance can be displayed and annotated during class. As a result, classroom presentations can be both flexible and dynamic, permitting responses to students’ questions and comments during class. The ability to write “on the fly” increases student-instructor interaction. Wise et al. (2006, p. 1) argue that use of the Tablet PC “facilitates bi-directional sharing of information, moving students beyond merely observing presentations to interacting with the material, the teacher, and each other.” A second type of advantage is in the actual display of information. For example, material that would have been erased on a blackboard can be re-displayed from a previous “page.” Similarly, it is easy to switch to different colors and styles of pens for graphing, highlighting, and marking displayed material. Since the instructor is writing on a Tablet PC but the material is displayed on a screen at the front of the classroom, the instructor does not have to worry about blocking students’ view of displayed material. Talley (2005, p. 17) reports that these elements of Tablet display improve lecture accuracy and efficiency. A third classroom advantage is that use of the Tablet PC permits easy switching to other applications, such as a spreadsheet or web browser.

 

 

II. Previous Studies of the Effects of Using Tablet PC Technology

            Most of the work investigating the academic use of Tablet PCs has been done for engineering and computer science courses. These studies typically report survey results of student perceptions of Tablet PC presentation. Mock (2004) reports that 10 of 16 responding students in a Java programming course strongly agreed that they preferred Tablet presentation to traditional methods, while 4 somewhat agreed and 2 neither agreed nor disagreed. Anderson et al. (2004) describe survey results from 25 computer science courses at 3 universities indicating that students felt that the Tablet PC encouraged them to pay more attention in class and helped them understand course material better than other presentation methods. Hulls (2005) reports that students in a computer engineering course stated that Tablet PC presentations were clearer and better organized than PowerPoint or overhead transparency presentation; students also thought that the Tablet PC improved instructor response to student questions during class. Wise et al. (2006) find that over 62% of respondents to student surveys in electrical and mechanical engineering courses indicated that they paid more attention to presentations with a Tablet PC, while 65% of respondents indicated that material was easier to understand when presented with a Tablet PC.

            The most detailed survey results for Economics courses are reported by Talley (2005). He finds that 86% of students in an Introductory Macro course preferred the Tablet to the whiteboard, while more than 75% of students in Introductory Micro courses believed that the quality of instruction was improved with use of the Tablet PC. Dixon, Pannell, and Villinski (2006) report survey results from one lower-level and one upper-level Economics course. They used pen-based computers in conjunction with DyKnow Vision to enhance student-instructor interaction in the classroom.[1] Their survey results show that in both classes approximately 60 percent of students felt that the software provided them with better notes (though about 20 percent reported that it did not, and about 20 percent were neutral). About 25 percent of students reported that they found the software distracting. Dixon, Pannell, and Villinski (2006, p. 55) infer from student comments that the technology may “address the needs of an important subset of students: those with some types of learning disabilities.” However, neither they nor Talley formally test the impact of the Tablet and pen-based technology on student learning.

            Unlike other studies that rely solely on student surveys, Koile and Singer (2006) tested the impact of a Tablet-PC system on student learning in an introductory computer science class. For the first five weeks of the course (through the first course exam), the instructor presented material with blackboard, overheads, and handouts. After the first exam, material was presented with a Tablet PC, and software was introduced that supported student wireless submission of digital-ink answers to in-class exercises; the instructor gave immediate feedback and led discussion of these exercises. Koile and Singer compared test scores for students from this recitation section with scores for students from other sections. They found that students in this section comprised 44.4 percent of students in the top ten percent of the class in final grades, whereas they had comprised only 35.7 percent of students scoring in the top ten percent on the first exam. Koile and Singer conclude that student learning was positively affected by the Tablet PC, and by the feedback mechanism in particular, but they caution that the small sample size (just 15 students) makes interpretation of these results preliminary.

            In short, although student survey results suggest that a majority of students prefer Tablet PC presentation to traditional methods, there is little empirical evidence to demonstrate that the presentation method significantly improves student learning. No studies to date have reported the results of a test of the impact of Tablet PC presentation on student learning in Economics.

 

III. A Natural Experiment on the Relative Effectiveness of Tablet PCs

            A natural experiment presented itself in my Principles of Macroeconomics courses during the Spring 2006 semester. I had used the Tablet PC in all of my Summer 2005 and Fall 2005 courses – a total of 4 courses, including one section of Macro Principles. I prepared to use the Tablet PC with wireless projector in three sections of Principles of Macroeconomics for the Spring 2006 semester. On the first day of classes I discovered, however, that one of the sections met in a classroom that did not have Tablet-wireless projector capability; in that section, I reverted to presenting material with markers on a whiteboard.[2] In the other two sections I used the Tablet with wireless projector as planned throughout the semester. Each of the three sections met for 75 minutes twice a week, on Tuesdays and Thursdays.

            I used the Tablet PC in the classroom largely as a substitute for a whiteboard or blackboard. Before each class meeting, I prepared a detailed outline of the day’s lecture and/or discussion in Windows Journal. During class I “filled in” the outline by writing notes and drawing graphs on the Tablet PC for projection on a screen at the front of the classroom, just as I did when using a marker on whiteboard. I also used the Tablet to link to both the course website and to external websites; I was able to access and project those same websites in the non-Tablet section by means of the desktop PC permanently mounted in that classroom.

            The material presented in each of the three sections was nearly identical, with small differences in pacing due to variability in student comments and questions. The style of presentation – lecture interspersed with discussion – was the same in each section. Thus the only teaching difference among the three sections was that the Tablet PC technology was used to present material in two sections while the marker-on-whiteboard technology was used to present material in one section.

 

IV. Comparisons of Student Performance

            Student surveys reported in previous studies suggest the hypothesis that presentation of class material with a Tablet PC improves student learning compared to presentation of that material with marker and whiteboard. The surveys suggest a mechanism for this improvement: the Tablet PC promotes clearer and better-organized presentations; the flexibility of Tablet PC presentation improves instructor response to student questions and comments, and increases student-instructor interaction; and Tablet PC presentation focuses student attention on course material. If the hypothesis is correct, then we should observe higher student performance, ceteris paribus, in the sections using the Tablet PC than in the section using the whiteboard.

            Table 1 presents means and standard deviations for three measures of student performance by section, along with means and standard deviations of other student characteristics. One measure of student performance (and, one hopes, of student learning) is the “Course Average” -- the percentage earned of all possible points in the course. Course Average includes scores on weekly multiple-choice quizzes (the lowest of 11 quiz scores was dropped for each student); scores on out-of-class assignments (four empirical analyses and five short case studies); three 75-minute tests (each consisting of 60 points worth of multiple-choice questions and 40 points worth of short essay/analysis questions); and one comprehensive final exam (86 points worth of multiple-choice questions and 114 points worth of short essay/analysis questions). A second, narrower, measure of student performance is “Final Exam” -- the percentage of points earned on the comprehensive final exam. A third, narrower still, measure of student performance is the “Post-Test” score. A Pre-Test consisting of fifteen multiple-choice questions was administered during the first class meeting of each section; “Post-Test” is the number of questions answered correctly on that same set of questions embedded in the final exam.
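            To make these grade components concrete, the sketch below computes a Course Average under the rules just described. It is written in Python; the function name and the quiz and assignment point totals are illustrative assumptions, while the three 100-point tests (60 + 40) and the 200-point final (86 + 114) follow the point values given above.

```python
def course_average(quiz_scores, quiz_max, assignment_points, assignment_max,
                   test_points, final_points):
    """Percentage of all possible points earned, dropping the lowest quiz.

    Illustrative sketch of the grading rule described in the text:
    11 weekly quizzes with the lowest score dropped, out-of-class
    assignments, three 100-point tests, and a 200-point final exam.
    """
    kept_quizzes = sorted(quiz_scores)[1:]          # drop the lowest of 11
    earned = (sum(kept_quizzes) + assignment_points
              + sum(test_points) + final_points)
    possible = (quiz_max * len(kept_quizzes) + assignment_max
                + 100 * len(test_points) + 200)
    return 100.0 * earned / possible
```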

            The mean Course Average was significantly higher in the 12:30 Tablet section than in the 8:00 Tablet section (t = 1.75) and in the 11:00 whiteboard section (t = 1.76). The mean Final Exam score was significantly higher in the 12:30 Tablet section than in the 8:00 Tablet section (t = 2.05) but not significantly higher than in the 11:00 whiteboard section (t = 1.40). When the two Tablet sections are combined into one group and compared to the whiteboard section, the differences between mean course averages and mean final exam scores are not statistically significant (t = .34 for Final Exam; t = .81 for Course Average).[3]
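            For readers who wish to replicate these comparisons, the sketch below computes a two-sample t-statistic in Python with scipy. The “data” here are placeholders simulated from the 12:30 Tablet and 11:00 whiteboard Final Exam means, standard deviations, and section sizes reported in Table 1, not the actual student records, and the paper does not state whether pooled-variance or Welch tests were used.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Placeholder scores simulated from the section statistics in Table 1.
tablet_1230 = rng.normal(loc=80.1, scale=9.3, size=36)
whiteboard_1100 = rng.normal(loc=76.7, scale=11.7, size=40)

# equal_var=True is the pooled-variance test; equal_var=False is Welch's.
t_stat, p_val = stats.ttest_ind(tablet_1230, whiteboard_1100, equal_var=True)
print(f"t = {t_stat:.2f}, p = {p_val:.3f}")
```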

            There were no statistically significant differences among the three sections in Post-Test scores. The biggest gains between Pre-Test and Post-Test occurred in the 11:00 whiteboard section. The gain (Post-Test minus Pre-Test) was significantly larger in the 11:00 whiteboard section than in the 8:00 Tablet section (t = 1.90) but not significantly larger than in the 12:30 Tablet section (t = .19).

            Thus the evidence does not confirm the hypothesis that Tablet PC presentation increases student performance. Final Exam scores were not significantly higher in the Tablet PC sections than in the marker-and-whiteboard section. The Course Average of one of the Tablet PC sections was significantly higher than the Course Average of the whiteboard section, but the Course Average of the other Tablet PC section was not higher than the Course Average of the whiteboard section. Comparisons of section means do not support claims that use of the Tablet PC increases student learning compared to use of the marker and whiteboard.

 

V. Accounting for Differences in Student Performance

            Student performance presumably depends on a variety of factors, including student ability and effort, not just on the presentation technology. These other factors may mask the effects of Tablet PC presentation on student performance. As the descriptive statistics in Table I show, student characteristics differed sharply among the sections in several respects.

            The percentage of male students in the Tablet sections was much higher than in the whiteboard section; the differences were statistically significant (t = 1.91 for the 8:00 Tablet and the whiteboard section and t = 2.84 for the 12:30 Tablet and the whiteboard section).[4] The Tablet sections had a higher mean cumulative GPA at the beginning of the semester than did the marker-and-whiteboard section; the differences were statistically significant (t = 2.17 and 2.85 for the 8:00 and 12:30 Tablet sections, respectively). The 12:30 Tablet section had a much larger percentage of students majoring in Economics or Business (“Business” encompasses the distinct majors of Accounting and Finance; Management; and Marketing) than did the other sections; the differences were statistically significant (t = 2.04 for the 8:00 Tablet section and t = 2.19 for the whiteboard section).

            Other differences among the sections were slight. The 8:00 Tablet section had a higher mean Pre-Test score than did the other sections, but the differences were not statistically significant. There were no significant differences among the sections in the percentage of students who had previously taken an Economics course (either at a 4-year college or at a community college), in class attendance, or in the percentage of students who made frequent use of Review Questions.[5]

            To account for the potential impact of these characteristics on student learning, a model of individual student performance was estimated. The model seeks to explain variations in student performance and learning by variations in student ability, student effort, and student background, as well as in the presentation technology.

A large body of literature suggests that academic ability is the most important explanatory variable in determining student learning. Grove, Wasserman, and Grodner (2006) survey this literature and demonstrate that the choice of proxy for academic aptitude can create meaningful differences in estimates of student learning. They conclude that the best control for academic aptitude is collegiate GPA. For this study, each student’s cumulative Grade Point Average prior to the beginning of the Spring 2006 semester was obtained from official university records.[6]

Student effort may also affect student learning (Talley 2005; Krohn and O’Connor 2005). Class attendance is one measure of student effort (Stanca 2006; Marburger 2006). Indeed, the method of presenting class material is irrelevant if students do not attend class. Each student’s class attendance is measured by the percentage of (non-test) class meetings at which the student was present.[7] Another proxy for student effort is a student’s self-reported use of the weekly Review Questions that were posted on the course website. Academic major may also affect relative student effort in a particular course. Students majoring in Economics or Business may perceive the Principles of Macroeconomics course to be more “relevant” to their major than, say, Fashion Design majors would, and thus expend greater effort in seeking to master the course material.[8]

            Student background is measured by a dummy variable indicating whether a student had previously taken an Economics course beyond high school.[9] A dummy variable for gender was added as a control variable, particularly to test the possibility that the presentation technology had a greater impact for some types of students than for others. Finally, a dummy variable was included indicating whether the student was in a section in which the Tablet PC was used.

            The model to be estimated takes the form:

 

            Student Performance = β0 + β1 Pre-test + β2 Gender + β3 Major + β4 GPA + β5 PreviousEcon + β6 ReviewQuestions + β7 Attendance + β8 Tablet

 

The model was estimated by OLS for each of the three dependent variables (Course Average; Final Exam score; Post-Test score). There is no reason to believe that students self-selected into either a Tablet or whiteboard section, as all sections were intended to use the Tablet and all of the instructor’s courses the previous semester had used the Tablet.[10] A potential bias in the OLS estimation may arise with the post-test score as dependent variable, if scores are bunched around the maximum possible score of 15. However, the data show no signs of such bunching.
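            As a minimal sketch of this estimation, assuming the student-level data were assembled in a file such as students.csv (a hypothetical name) with one row per student and 0/1 dummies for Gender, Major, PreviousEcon, ReviewQuestions, and Tablet, the three regressions could be run in Python with statsmodels:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical student-level data set; column names follow the
# variables defined in the text.
df = pd.read_csv("students.csv")

# One OLS regression per dependent variable, as in Table 2.
for outcome in ["CourseAverage", "FinalExam", "PostTest"]:
    model = smf.ols(
        f"{outcome} ~ PreTest + Gender + Major + GPA + PreviousEcon"
        " + ReviewQuestions + Attendance + Tablet",
        data=df,
    )
    print(model.fit().summary())
```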

            Table 2 shows the estimation results for each of the three measures of student performance. In each equation the F-statistic is highly significant, indicating that each regression has substantial explanatory power. Multicollinearity does not appear to be a problem, as the correlation coefficients between most pairs of variables are quite small; the largest correlation is 0.44, between Attendance and GPA.
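            This multicollinearity screen can be reproduced, under the same assumed data layout as above, by inspecting the pairwise correlations among the regressors:

```python
import pandas as pd

df = pd.read_csv("students.csv")   # same hypothetical file as above

regressors = ["PreTest", "Gender", "Major", "GPA", "PreviousEcon",
              "ReviewQuestions", "Attendance", "Tablet"]
# The paper reports a maximum pairwise correlation of 0.44,
# between Attendance and GPA.
print(df[regressors].corr().round(2))
```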

            Consistent with the results of other studies (see Grove, Wasserman, and Grodner 2006), student ability was an important determinant of student learning. The coefficient on GPA was positive, large, and statistically significant in each of the three equations. Ceteris paribus, students who entered with higher cumulative GPAs earned higher grades in the Principles of Macro course.

            The measures of student effort showed little impact on student learning. The impact of Attendance on a student’s Course Average was positive and statistically significant. Attendance had no significant effect on Final Exam score, however, and its estimated effect on Post-Test score, while statistically significant, was negative. Neither the frequent use of Review Questions nor majoring in Economics or Business had a significant impact on any of the measures of student performance.

            Measures of student background also showed little impact on student learning. Having previously taken an Economics course significantly improved a student’s Course Average, but its impact on other measures of student performance was not statistically significant. The Pre-Test score was positively related to the Post-Test score, though its magnitude was small. The Pre-Test score was not statistically significant in the other equations.

            The coefficient on the Gender dummy variable was statistically significant and large. Males had higher student performance scores, ceteris paribus, than did females. This result is consistent with the findings of studies examining the impact of gender on learning in Economics; these studies generally find that men outperform women in introductory Economics courses (Krohn and O’Connor 2005).

            The estimation results are not consistent with the hypothesis that Tablet PC presentation improves student performance. The coefficient on the Tablet dummy variable was statistically significant and large in each of the equations; its magnitude was just slightly less than that of the coefficient on the Gender dummy variable in each equation. However, the coefficient on the Tablet dummy variable was negative in each of the equations: ceteris paribus, Tablet presentation reduced student scores relative to presentation with marker and whiteboard. If presentation of class material with the Tablet PC increases the clarity and organization of lecture presentations, improves instructor response to student questions, and enhances student attentiveness in class and understanding of presented material, those effects do not translate into measurable gains in student learning.

            Perhaps Tablet PC presentation is effective for some types of students but not for others. Dixon, Pannell, and Villinski (2006) suggest that the presentation technology could have different impacts for different groups of students. To test the hypothesis that the effects of presentation technology differ for different types of students, three interaction terms were added to the regression equations. Specifically, the Tablet dummy variable was interacted with the Gender dummy variable, the GPA variable, and the Attendance variable. Tablet presentation may affect gender-based differences in learning styles. Tablet presentation may have differing impacts on students of differing abilities. Tablet presentation may have differing impacts on students of differing effort, as proxied by class attendance.
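            Under the same assumed data layout as above, the interacted specification could be estimated as follows. In the formula syntax, Tablet:X adds the interaction term alone, matching the three interaction rows reported in Table 3.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("students.csv")   # hypothetical file, as above

# Baseline regressors plus the three Tablet interaction terms.
for outcome in ["CourseAverage", "FinalExam", "PostTest"]:
    model = smf.ols(
        f"{outcome} ~ PreTest + Gender + Major + GPA + PreviousEcon"
        " + ReviewQuestions + Attendance"
        " + Tablet:Gender + Tablet:GPA + Tablet:Attendance",
        data=df,
    )
    print(model.fit().summary())
```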

            Table 3 reports the results of estimating the model of student learning with the interaction terms included. The addition of the interaction terms leaves GPA as the sole variable that is statistically significant in each of the equations. In general, the interaction terms are not significant. The only interaction term to appear statistically significant is the Tablet-GPA interaction term in the Final Exam equation. The coefficient there is positive, suggesting that Tablet presentation improves the Final Exam scores of stronger students. This result is at odds with Dixon, Pannell, and Villinski’s (2006) suggestion that the presentation technology may particularly improve the performance of students with learning disabilities.

 

VI. Conclusion

The test conducted here focuses only on the presentation features of the Tablet PC. It does not incorporate other uses of the Tablet PC that might have pedagogical value. For example, the ability to electronically receive, mark, and return student papers is not considered here. Neither does the analysis consider the use of the Tablet PC in conjunction with other software that enables direct exchange of content between students and instructor and between students and other students. The test conducted here is a test of the relative effectiveness of one instructor’s use of Tablet PC presentation technology compared to that instructor’s use of more-traditional presentation methods. Still, the evidence presented here fails to support the hypothesis, suggested by survey results from previous studies, that presentation of classroom material with the Tablet PC enhances student learning.

Talley’s (2005) model of student learning suggests one possible explanation for the failure of the results to support the hypothesis. Suppose that students seek to minimize the costs of learning, subject to the constraint of learning enough to achieve a desired grade (rather than seeking to maximize learning subject to the constraint of the costs of learning). A technological innovation that reduces a student’s opportunity cost of learning may thus change that student’s behavior. With the introduction of cost-reducing teaching technology, some students may choose to seek increased course performance. Other students, however, may choose to pursue the same grade but with less effort and at a lower total cost. Talley (2005, p. 36) concludes that “Cost-reducing innovations in technology for teaching could produce empirical results that appear as failures of the technology to improve teaching and learning when the positive effect on teaching and learning significantly affects too few students to be picked up by the statistics.”
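            One way to formalize this argument (the notation below is mine, not Talley’s): let a student choose in-class effort $e_c$ and out-of-class effort $e_o$ to minimize the total cost of learning, subject to reaching the learning level $\bar{L}$ required for the target grade:

$$\min_{e_c,\,e_o}\; c_c(e_c) + c_o(e_o) \qquad \text{subject to} \qquad L(e_c, e_o; T) \ge \bar{L},$$

where $T$ indexes the presentation technology. If the Tablet PC raises the productivity of in-class effort, a cost-minimizing student can satisfy the constraint with less out-of-class effort $e_o$, so measured learning remains at $\bar{L}$ even though the technology has a real effect.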

We must be cautious, then, in interpreting the results of analyses that do not support the hypothesis that innovations in teaching technology increase student learning. The failure of the analysis here to find a positive, significant effect of the Tablet PC on student performance does not necessarily mean that replacing the marker-and-whiteboard with the Tablet PC has no impact on student learning. Rather, it may be that the Tablet PC increases the efficiency of student learning. Students may learn the same amount of material but with less effort. Presentation of material with the Tablet PC may make it easier for (some) students to learn material presented in the classroom. Perhaps those students respond to this greater ease of learning in the classroom by reducing effort expended outside the classroom. That is, they may spend less time reading the textbook or reviewing course notes or otherwise studying. The same total amount of learning may be achieved at a reduced expenditure of time and effort. Further research into the effectiveness of Tablet PCs and other instructional innovations should address the efficiency of learning (learning per unit of time and effort) as well as the absolute amount of learning.

References

 

Anderson, Richard, Ruth Anderson, Beth Simon, Steven A. Wolfman, Tammy VanDeGrift, and Ken Yasuhara. 2004. “Experiences with a Tablet PC Based Lecture Presentation System in Computer Science Courses.” In Proceedings of the 35th SIGCSE Technical Symposium on Computer Science Education. New York: ACM Press.

 

Dixon, Mary, Kerry Pannell, and Michele Villinski. 2006. “From ‘chalk-and-talk’ to Animate and Collaborate: DyKnow-Mite Applications of Pen-based Instruction in Economics.” In The Impact of Tablet PCs and Pen-based Technology on Education, edited by Dave Berque, Jane Prey, and Robert Reed. West Lafayette, IN: Purdue University Press.

 

Grove, Wayne A., Tim Wasserman, and Andrew Grodner. 2006. “Choosing a Proxy for Academic Aptitude.” Journal of Economic Education 37: 131-147.

 

Hulls, Carol C.W. 2005. “Using a Tablet PC for Classroom Instruction.” Proceedings of the 35th Annual ASEE/IEEE Frontiers in Education Conference. Indianapolis.

 

Koile, Kimberle, and David Singer. 2006. “Development of a Tablet PC-based System to Increase Instructor-Student Classroom Interactions and Student Learning.” In The Impact of Tablet PCs and Pen-based Technology on Education, edited by Dave Berque, Jane Prey, and Robert Reed. West Lafayette, IN: Purdue University Press.

 

Krohn, Gregory A., and Catherine M. O’Connor. 2005. “Student Effort and Performance over the Semester.” Journal of Economic Education 36: 3-29.

 

Marburger, Daniel R. 2006. “Does Mandatory Attendance Improve Student Performance?” Journal of Economic Education 37: 148-156.

 

Mock, Kenrick. 2004. “Teaching with Tablet PCs.” Journal of Computing Sciences in Colleges 20: 17-27.

 

Schwager, Paul, John E. Anderson, and Richard L. Kerns. 2005. “Faculty Perceptions of Tablet PCs for Teaching, Research, and Service: A College of Business Perspective.” Proceedings of the 2005 Southern Association of Information Systems Conference.

 

Stanca, Luca. 2006. “The Effects of Attendance on Academic Performance: Panel Data Evidence for Introductory Microeconomics.” Journal of Economic Education 37: 251-267.

Talley, Daniel A. 2005. “Technology and Teaching: Learning in a High-Tech Environment Revisited.” Paper presented at the January 2005 American Economic Association meetings. Retrieved from http://www.homepages.dsu.edu/talleyd/SelectedPapers.htm on November 3, 2006.

 

 

Wise, John C., Roxanne Toto, and Kyu Lon Kim. 2006. “Introducing Tablet PCs: Initial Results from the Classroom.” Proceedings of the 36th Annual ASEE/IEEE Frontiers in Education Conference. San Diego.

Table 1. Descriptive Statistics.

                                            Tablet                        Whiteboard
                                      8:00              12:30             11:00
Variable                         Mean (std. dev.)  Mean (std. dev.)  Mean (std. dev.)
--------------------------------------------------------------------------------------
Course Average (%)                 79.7 (12.4)       84.3 (8.7)        80.3 (10.9)
Final Exam (%)                     74.6 (12.5)       80.1 (9.3)        76.7 (11.7)
Post-test (out of 15)              12.8 (1.7)        12.8 (1.6)        12.9 (1.4)
Pre-test (out of 15)                7.9 (2.4)         7.3 (2.3)         7.3 (1.4)
Gender (% male)                    55 (50)           65 (49)           33 (47)
Business/Econ major (%)            45 (50)           72 (47)           45 (50)
Cumulative GPA                      2.93 (.59)        2.91 (.56)        2.62 (.61)
Taken Previous Econ course (%)     45 (50)           44 (50)           43 (50)
Used Review Questions
  Frequently (%)                   27 (50)           31 (48)           28 (45)
Class meetings attended (%)        87.6 (13.1)       90.3 (11.2)       88.9 (10.7)
N                                  33                36                40

Table 2. OLS Estimation Results.

                      Course Average     Final Exam       Post-test
Intercept              18.74 (0.00)     32.47 (0.00)     9.65 (0.00)
Pre-test                0.09 (0.78)      0.38 (0.38)     0.15 (0.02)
Gender                  3.34 (0.01)      5.55 (0.00)     0.92 (0.00)
Major                   0.35 (0.78)     -0.36 (0.83)    -0.10 (0.70)
GPA                    10.97 (0.00)     12.06 (0.00)     1.36 (0.00)
PreviousEcon            2.46 (0.06)      2.39 (0.17)     0.21 (0.41)
Review Questions        2.03 (0.12)      1.59 (0.37)     0.43 (0.11)
Attendance              0.33 (0.00)      0.07 (0.32)    -0.02 (0.08)
Tablet                 -3.03 (0.02)     -4.93 (0.01)    -0.88 (0.00)

Adjusted R2              .70              .49             .37
F                      32.38            14.13            8.76
N                      109              109             109

Note: p-values are in parentheses.

Table 3. OLS Estimation Results with Interaction Terms.

                      Course Average     Final Exam       Post-test
Intercept              16.07 (0.00)     28.91 (0.00)     8.97 (0.00)
Pre-test                0.19 (0.56)      0.41 (0.36)     0.17 (0.01)
Gender                 -0.10 (0.96)      4.97 (0.09)     0.50 (0.25)
Major                   0.47 (0.71)     -0.37 (0.83)    -0.09 (0.72)
GPA                     9.55 (0.00)     11.45 (0.00)     1.13 (0.01)
PreviousEcon            2.16 (0.09)      2.22 (0.21)     0.17 (0.53)
Review Questions        1.80 (0.17)      1.67 (0.35)     0.41 (0.13)
Attendance              0.41 (0.00)      0.13 (0.21)    -0.01 (0.71)
Tablet * Gender         5.08 (0.04)      0.81 (0.81)     0.60 (0.25)
Tablet * GPA            1.70 (0.46)      0.74 (0.81)     0.28 (0.56)
Tablet * Attendance    -0.11 (0.14)     -0.08 (0.43)    -0.02 (0.18)

Adjusted R2              .71              .48             .36
F                      27.18            11.05            7.04
N                      109              109             109

Note: p-values are in parentheses.

 


 

[1] DyKnow Vision is pen-based collaborative software that enables students to make real-time annotations to class documents and to share information with instructors and classmates during class. The software records class content and class process, so that students can replay their notes to see, say, the development of a graph and not just the finished product.

[2] Since that class met at a popular time, 11:00-12:15 on Tuesdays and Thursdays, I was unable to locate an alternative classroom that had Tablet-wireless projector capability.

[3] The higher Final Exam score in the 12:30 section suggests that transfer of test information among students in different sections is not the reason for the higher Course Average (including the scores on the three intra-semester tests) in the 12:30 section. The university-wide final exam schedule for that semester placed the 12:30 section exam before the exams in the other sections.

[4] Overall undergraduate enrollment at this university is 61% female and 39% male.

[5] Each week Review Questions were posted on the course website. Students were not required to turn in answers to these questions, nor were Review Questions specifically addressed in class. Rather, students were advised that reviewing these questions would be helpful in preparing for quizzes and tests. A survey question on the final exam asked students to report how often they “looked over” the review questions; those who reported that they looked over the questions “every week” or “almost every week” were counted as “frequent” users.

[6] Three students who were first-semester transfers in the Spring 2006 semester, and who thus had no cumulative GPA at the university, were excluded from this study.

[7] Of the 30 scheduled class meetings during the semester, three were devoted to tests and one was canceled as I attended a conference. I have attendance data for 25 of the 26 non-test class meetings, collected by means of a sign-in sheet circulated at the beginning of each class meeting. Students who arrived at class late knew to find the sign-in sheet at the end of class to add their name. Student attendance did not contribute directly to grading, though students were informed that the instructor would subjectively consider attendance for students whose overall course average left them on the border between two grades.

[8] Information on the use of Review Questions, major, and previous Economics courses was self-reported by students on a questionnaire attached to the final exam. Data on several students who chose not to report this information were omitted from the analysis.

[9] Macro is ordered before Micro in the university’s course-numbering system. However, the sequence in which students take these two courses (if indeed they take both) is not regulated. Some students in Macro principles have taken Micro principles; some have not.

[10] One student did move from a Tablet section to a whiteboard section, but cited schedule conflicts rather than presentation methods as the reason for switching.