
IEEE TRANSACTIONS ON PROFESSIONAL COMMUNICATION, VOL. 56, NO. 4, DECEMBER 2013

Research Article

An Overview of Experimental and Quasi-Experimental Research in Technical Communication Journals (1992–2011)

—RYAN K. BOETTGER, MEMBER, IEEE, AND CHRIS LAM

Abstract—This study explores a comprehensive sample of experimental and quasi-experimental research within five leading technical communication journals over a 20-year period. Exploratory studies can overview how a method has evolved within a field, highlighting how it has advanced understanding of communication and identifying areas for further inquiry. Research questions: (1) How has experimental research in technical communication journals developed over the 20-year period? Specifically, how much is being published, which journals publish experiments, what topics are being explored, and what fields are informing this research? (2) What content characterizes experimental research in technical communication? Specifically, how explicit are the research questions/hypotheses, are the results of pilot studies reported, what are the sample sizes and populations used, and what measures do researchers use? (3) Who publishes experimental research in technical communication? Specifically, which authors and affiliates are most associated with experimental research, and how does the sample’s gender and authorship distribution compare to existing research? Literature review: We first address how scholars have assessed research in technical communication and how these findings implicate experimental research. We then review features of other exploratory studies that inform this study’s design. Methodology: We conducted a quantitative and qualitative analysis of 137 experiments, a comprehensive sample identified from a corpus of 2,118 refereed papers published from 1992 to 2011. We coded 14 variables related to the causal relationships that the experiments addressed and who produced the research. We subjected the data to multiple statistical measures, including contingency table analysis and correspondence analysis. Results and conclusions: Over the 20 years, the journals published experimental research at a consistent rate. This could indicate that these methods have a stable presence in the field, or it could be a discouraging sign that output is not on the rise despite calls from leading scholars. IEEE TRANSACTIONS ON PROFESSIONAL COMMUNICATION (TPC) emerged as a strong producer of experiments, publishing 45% of the sample. TPC was also associated with the most recent experiments, assuming this role from the Journal of Technical Writing and Communication (JTWC), which was associated with early experiments. In addition, TPC, the Journal of Business and Technical Communication (JBTC), and Technical Communication (TC) correlated with experiments on collaboration, pedagogy, and intercultural communication, respectively. The results also revealed that recent experiments reported significantly more explicit research questions/hypotheses and pilot studies, an encouraging sign for the quality of future experiments. Finally, Spyridakis published the most experiments over the past 20 years, and researchers at the University of Washington and the University of Twente were the top affiliates associated with output. The configuration of both of these institutions’ programs, which seem to align with a traditional science model, might suggest how the evolution of technical communication programs impacts the type of research that its affiliates produce. Our results are limited by the small, though comprehensive, sample and the exploratory nature of measures like correspondence analysis. Future research could use the proposed framework to investigate the evolution of other research methods in technical communication, strengthening our body of knowledge.

Index Terms—Correspondence analysis, experiments, quasi-experiments, research methods, technical communication.

As professional and technical communication stabilizes as an academic discipline, attention to how we conduct our research remains paramount. The field’s scholars have called for more rigorous, coherent, and systematic research as well as stronger questions and methods for issues related to the field [1]–[6]. Investigations on the state of research in technical communication have revealed future challenges but have also identified our interdisciplinarity, particularly our methodological plurality, as a positive contributor to the field’s scholarship. Nevertheless, experimental approaches to research remain underused and underexplored.

Manuscript received February 20, 2013; revised August 16, 2013; accepted August 17, 2013. Date of current version November 20, 2013. The authors are with the Department of Linguistics and Technical Communication, University of North Texas, Denton, TX 76203 USA (email: [email protected]; [email protected]).

DOI 10.1109/TPC.2013.2287570

Experiments test causal relationships, and the results help generalize our understanding of communication practices. We define true experiments as systematic investigations into the possible causes of a phenomenon. This process traditionally tests a hypothesis by manipulating at least one independent variable within a group of randomly assigned subjects in a controlled environment [7]. In contrast, quasi-experiments (or natural experiments) consist of already established groups and occur in natural settings, such as a classroom or a workplace. As a result, researchers must establish between-group equality before introducing a treatment and offer a hypothesis to account for an ineffective treatment and threats to internal validity [8], [9].

Previous research in the field demonstrates an anemic record of experimental output. Less than 1% of the proceedings from the 1972–1991 conferences hosted by the Society for Technical Communication were experimental [10]. None of the 178 technical communication dissertations written from 1989–1998 employed a true experimental design, and only 3.9% employed a quasi-experimental design [11]. Technical communicators recently identified using 25 different methods, but only 8.6% of respondents indicated a use of experimental methods [12]. For this study, we identified a comprehensive sample of 137 experiments, which was 6.47% of all the refereed papers (2,118) published over 20 years.

Spyridakis wrote that experiments are important to the topics that we investigate [13] and respond to what Charney defined as the purpose of technical communication research:

to promote text designs that are easy for readers to use, to acculturate students into professional discourse communities, and to identify and promote effective and ethical communication practices in the workplace. [7, p. 111]

Charney’s defined purpose highlights that technical communication must encompass the theoretical and applied perspectives of inquiry, similar to psychology, engineering, and human resources. Technical communication research not only informs our scholarly community and influences our classroom instruction but also guides practitioners toward their profession’s most effective standards and practices. The professional nature of the field requires that these standards and practices be continuously evaluated, tested, and refined.

A popular means for achieving this goal is the survey, a method that relies on the self-report of participants and often a relatively small sample. MacNealy found that 74% of the proceedings related to Society for Technical Communication-hosted conferences included a survey compared to the less than 1% that were experimental [10]. MacNealy reiterated that the public has become saturated with surveys, making it difficult to generalize results. Almost 20 years later, Eaton and her colleagues noted the consistently low survey response rates within technical communication, citing the dwindling participation in the Society for Technical Communication’s Annual Salary Survey [14]. Surveys record necessary and important information on communication trends but, like any method, offer a single perspective that should be supplemented with other approaches. Alternatively, fields like psychology have historically relied on experimentation to guide their practitioners’ best practices.

In this paper, we explore a comprehensive sample of 137 experiments published from 1992–2011 within five leading technical communication journals. Our study identifies features related to the quantity and quality of experimental research as well as how these features contribute to a coherent body of knowledge. Exploratory studies like the present one overview how a method has evolved within a field. The results highlight how the method has informed communication practices and identify areas for further inquiry. Though our investigation is limited to experimental rather than all empirical methods, we believe the results provide insight on a necessary contributor to the future of technical communication scholarship.

To guide this exploration, we posed three research questions: (1) How has experimental research in technical communication journals developed over the 20-year period? Specifically, how much is being published, which journals publish experiments, what topics are being explored, and what fields are informing this research? (2) What content characterizes experimental research in technical communication? Specifically, how explicit are the research questions/hypotheses, are the results of pilot studies reported, what are the sample sizes and populations used, and what measures do researchers use? (3) Who publishes experimental research in technical communication? Specifically, which authors and affiliates are most associated with experimental research, and how does the sample’s gender and authorship distribution compare to existing research?

This paper has the following structure. The literature review synthesizes recent and relevant discussions on the general research practices in technical communication. The methodology section describes how the study was conducted, beginning with identifying the sample and followed by describing the measures used to explore the sample. The results section reports the quantity and quality characteristics of experimental research in technical communication. In the final section, we examine how our results contribute to the field’s body of knowledge, acknowledge the limitations of the study, and propose directions for future research.

LITERATURE REVIEW

This section first describes the theoretical orientation that guided our approach as well as how we selected the literature to review. We then synthesize the major findings from this literature and relate the broader themes to the presence of experimentation in technical communication. Finally, we describe several exploratory studies within and outside the field that informed the design of the present study.

Theoretical Orientation

The design of this study was motivated by the consistent findings on the state of research in technical communication as well as the past histories of related fields like composition. Over the last three decades, scholars have identified technical communication as an evolving academic discipline. As described later in this section, we found that many of these studies offered similar results: technical communication is a methodologically diverse field but could benefit from more focused questions and its researchers from different methods training (such as [2], [4], and [6]).

These studies’ results also noted concerns among prominent scholars like Charney, who questioned if the field is matching the right method with the right question or defaulting to the method the researcher is more comfortable using [5]. Respondents to a recent questionnaire indicated that most technical communication scholars primarily employed qualitative research methods that focused on discourse and texts and historical research, suggesting that the field’s methodological plurality does not necessarily emphasize quantitative methods [12]. Concerned with similar trends, Haswell traced the decline of replicable data-supported research in NCTE/CCCC, the two flagships of postsecondary education in composition [15]. The absence of empirical methods in writing-based scholarship is pronounced and indicates why some foundational questions remain unanswered. If qualitative methods yield hypotheses and quantitative methods test them, a future technical communication inquiry may need to expand the depth of its methodological plurality. When he launched Written Communication in 1984, Stephen Witte warned of the dangers of a methodological imbalance: “A field that presumes the efficacy of a particular research methodology, a particular inquiry paradigm, will collapse inward upon itself” [15, p. 220].

Selection of Literature to Review

We began by re-reading the research that challenged and shaped our own perspectives as technical communication researchers (such as [5], [13], and [15]). These important pieces informed the study’s questions but also reminded us of the exigencies these authors placed on enhanced methods training, particularly with quantitative approaches. Using Google Scholar and the electronic databases of the five primary technical communication journals, we then evaluated the abstracts of the articles that cited these important pieces. This approach allowed us to assess how current technical communicators were synthesizing these ideas. We selected relevant literature that was published within the study’s designated time period (1992–2011) to contextualize the development of experimental research as reported in the results section. We focused our review on literature published in journal articles because technical communication is arguably a journal-oriented as opposed to a book-oriented field. Collectively, we categorized most of this research as: (a) studies that surveyed the field’s thought leaders on general research practices [2], [6], [12]; (b) studies that reviewed the quantity and quality of research output [10], [11], [16], [17]; or (c) tutorials or case studies on specific methods [1], [3], [13].

Findings on Research Practices in Technical Communication

The results have revealed related strengths and concerns that implicate the current state of experimental research.

Methodological Plurality: In the most recent overview of the field’s research practices, scholars primarily characterized our methodological plurality as a strength [2]. Carliner et al.’s study identified a range of methods employed in the research published within four technical communication journals, including experiments, case studies, document reviews, and experience reports [17]. As mentioned earlier, technical communicators identified the use of 25 different method types, primarily qualitative research focused on discourse and texts and historical research [12].

This plurality also extends to the theories and content areas that inform our research. Many academics and practitioners discover technical communication after careers in human factors, human resources, public relations, and business management, and it is common for these experiences to inform future research [2, p. 82]. Rude acknowledged that technical communicators borrow methods, theories, and content areas but also stressed the importance of establishing a separate identity that solidifies our value to others [4]. She proposed a unique research question centered on texts as well as four areas of related questions to help scholars achieve this goal. Alternatively, Charney expressed concerns about separating too much from fields, such as rhetoric and composition, where many current scholars developed their theoretical and pedagogical foundations [2]. Maintaining the field’s encompassing approach to research while simultaneously establishing a unique identity invites its own research challenges.

Research Challenges: Research challenges facing the field can be connected to the general output of empirical research like experiments as well as observations about the general practices and training of our scholars.

Generally, there appears to be a dearth of empirical research in technical communication. In 1992, MacNealy reported that of the 3,479 entries in conference proceedings over the last 20 years, only 148 were empirical. She wrote that the focus of this research was on “personal and often limited experiences and preferences” rather than the quantity, quality, and coherence of this research [10, p. 533]. MacNealy conceded that anecdotal research was common for a developing discipline like technical communication; however, our current scholars are offering the same observations two decades later. Eaton classified a large amount of technical communication research as a collection of “cup of coffee articles” because the results were only as useful as having a cup of coffee with someone and discussing an experience [18, p. 9]. Similarly, Carliner et al. recently reported that at least three of the leading technical communication journals published a large amount of first-person experience reports and document reviews, which are often research based but not always empirical [17]. IEEE TRANSACTIONS ON PROFESSIONAL COMMUNICATION (TPC) was the only journal that consistently published empirical research, primarily experiments, surveys, and tutorials. The types of research we choose to produce can signify how successfully the field has moved toward full professional status.

Professional Status: The goal of obtaining full professional status returns to the idea of technical communication as an evolving discipline in need of a coherent body of knowledge, which can unite a field and establish its identity [19]. But Charney concluded that the overlap among technical communication projects was “insufficient for either building on or challenging published work” [2, p. 77]. Recent research has focused on the professionalization of technical communication, including the significant strides the STC has made in establishing its own online Body of Knowledge repository [20], [21]. However, at the time this paper was revised, no author had contributed content to the “Quantitative Methods” page, while the “Qualitative Methods” and “Usability Research” pages included substantial content contributions [22].

Features From Exploratory Studies

To re-examine how experimental methods could contribute to technical communication’s body of knowledge, scholars must first inventory the existing research. Exploratory studies offer insights into how a field or phenomenon has evolved. We identified no literature that measured how technical communicators applied a specific method over time. However, several studies within and outside the field have explored bodies of research and identified characteristics related to timeframe, the sample, and authorship that would inform such a study.

Timeframe: All of the exploratory studies that we reviewed investigated a topic within a defined timeframe. For example, Rainey’s two studies on doctoral dissertations in technical communication spanned over 30 years, which allowed for a longitudinal review of how research topics and methods evolved [11], [16]. Juzwik et al. were more interested in an overview of current writing studies research, so they limited their investigation to a 6-year period [23]. The timeframe for these three studies also allowed the researchers to analyze a comprehensive dataset rather than subject the data to various sampling methods that might reduce the generalizability of the results.

Sample Characteristics: Previous studies have collected a variety of metadata on their sample that might suggest the characteristics inherent to quality research. For example, research topic and method were popular variables to explore because they could suggest scholarly productive areas as well as areas in need of inquiry [17], [23]–[27]. Researchers also identified characteristics of their sample, including information on human subjects, sample size, and population (such as students and practitioners), and the statistical measures reported [23], [24], [26]. Identifying these variables could help validate a study’s results as well as suggest the quality of the research.

Authorship Characteristics: Previous exploratory studies have also focused on the scholars producing research, including author collaboration, gender, and affiliation. Two technical communication studies offered insight into how these variables shaped the research in two leading journals and offer a baseline for exploring the same characteristics in experimental research.

An analysis of a quarter century of TPC papers (identified through stratified random sampling) found that 63% of the papers were single-authored and 37% were co-authored [25]. More authors were male than female (61% compared to 39%), but female authorship increased over this timeframe. This analysis also recorded an increase in international authorship since 1996 as well as a high percentage of university (rather than industry or government) affiliations.

During the five years that Burnett edited the Journal of Business and Technical Communication (JBTC), most papers were single-authored rather than co-authored (78% compared to 22%) [28]. There was also an uneven gender distribution among authors (62% females compared to 38% males). The slightly increased presence of collaboration in TPC and the reversed gender distribution between both journals are interesting, but not directly comparable due to the timeframes for analysis.

Questions Generated by the Literature Review

In the literature review, we identified several features and ideas that might suggest the quantity and quality of the experimental research published in technical communication journals over a 20-year period. To better organize these ideas, we developed the following questions:

RQ1. How has experimental research in technical communication journals developed over the 20-year period? Specifically, how much is being published, which journals publish experiments, what topics are being explored, and what fields are informing this research?

RQ2. What content characterizes experimental research in technical communication? Specifically, how explicit are the research questions/hypotheses? Are the results of pilot studies reported? What are the sample sizes and populations used? And what measures do researchers use?

RQ3. Who publishes experimental research in technical communication? Specifically, which authors and affiliates are most associated with experimental research, and how does the sample’s gender and authorship distribution compare to existing research?

METHODOLOGY

This section justifies our research methodology and outlines the sample selection and collection process. We conclude with a description of the measures used to explore the final sample of 137 experimental pieces.

Choice of Research Methodology

As reported in the literature review, a substantial amount of technical communication research has been categorized as anecdotal, which limits how results can be extended or challenged. Therefore, we devised a quantitative, data-driven study so that the results and the design could inform future research. This approach allowed us to make more confident claims about the development of experimental research in technical communication journals. We also examined the data qualitatively to illustrate the correlations and significant findings of our results.

Choice of Samples to Study

The selection of our sample included identifying the journals, the timeframe for analysis, and the experimental pieces.

We selected five journals for this study: Technical Communication (TC), Technical Communication Quarterly (TCQ), JBTC, TPC, and the Journal of Technical Writing and Communication (JTWC). These journals were previously identified as the leading publications in our field [29]; however, other studies have used a different combination of journals for analysis [17], [30].

Next, we identified the timeframe for analysis. Since we focused our inquiry on experimental research published in technical communication journals, we found that the content and arguments offered in Spyridakis’ TC tutorial on experimental research, as well as her own record of applying these methods, provided a strong catalyst for beginning our analysis in 1992. Spyridakis’ defined audience—current and future experimental researchers and researchers who needed to learn to interpret experimental findings—proved unique and preceded other important scholarship by MacNealy, Charney, and others. We concluded the timeframe in 2011, which, at the time of coding, provided the latest complete volume of each journal.

TABLE I. VARIABLE AND VARIABLE LEVELS CONSIDERED IN THE PRESENT STUDY

Finally, we determined what content to include in the sample. We selected only peer-reviewed content, excluding book reviews, editorials, summaries of research published elsewhere, article reprints, and similar types of content that editors did not send for peer review. These parameters yielded a corpus of 2,118 articles, which we then culled for experimental pieces.

Our institution’s library housed digital versions of most of the 1992–2011 issues from these five journals (though we did have to locate paper copies of a few early issues of TCQ). One of us identified the experimental sample from reading the article abstract, introduction, and methodology section (if present) of the entire corpus. This approach proved time consuming because authors did not always identify their method appropriately or explicitly. One researcher, for example, described her study as a textual analysis in the abstract but then as an experiment in the methods section. Other researchers clearly employed experimental methods but never identified their approach as such. To account for these discrepancies, one of us then independently cross-checked the sample by identifying experimental pieces via keyword searches in the journals’ respective electronic databases. This process entailed searching the full text of articles for “experiment” and “experimental” as both returned different results. We both discussed every article before including it in the final sample.

These approaches yielded a sample of 137 papers that used experimental methods, including 108 experiments, 11 pilot/exploratory experiments, 10 undefined experiments, and 8 quasi-experiments. The 10 undefined studies included experimental approaches but were not explicitly labeled as such. The total sample equated to approximately seven experimental pieces a year and 6.47% of all refereed content published in the journals within the designated timeframe.

How Data Were Collected

Once we identified the sample, we manually coded the 137 papers for 14 variables: Journal, Year, Author, Gender, Affiliation, Affiliation Type, World Region, Topic, Origin, Hypothesis/Question, Pilot Study, Sample Size, Sample Type, and Measures. Table I provides a description of each variable and its levels.


These variables and levels were selected based on their presence in previous studies as well as areas we believed important to experimental research [17], [23]–[27], [30]. We each coded half of the sample independently and then coded 20% of each other’s sample to ensure reliability. A kappa test identified an overall between-rater agreement of 79.8%. Both of us discussed and reconciled any coding discrepancies.
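For readers unfamiliar with this reliability check, the following minimal R sketch (ours, not the authors’ code) shows how Cohen’s kappa can be computed from two coders’ category assignments; the rater1 and rater2 vectors are hypothetical:

    # Hypothetical method codes assigned by two coders to the same ten articles
    rater1 <- c("experiment", "quasi", "experiment", "pilot", "experiment",
                "quasi", "experiment", "experiment", "pilot", "experiment")
    rater2 <- c("experiment", "quasi", "experiment", "experiment", "experiment",
                "quasi", "experiment", "experiment", "pilot", "experiment")

    tab <- table(rater1, rater2)                          # coder-by-coder confusion matrix
    po  <- sum(diag(tab)) / sum(tab)                      # observed agreement
    pe  <- sum(rowSums(tab) * colSums(tab)) / sum(tab)^2  # agreement expected by chance
    (po - pe) / (1 - pe)                                  # Cohen's kappa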

A category that proved difficult to code was Topic. We coded a small sample of articles using four different coding schemas before arriving at the final approach. The topics listed in the STC Body of Knowledge (used in [17]) and the eServer Technical Communication Library (http://tc.eserver.org) were too broad for our purposes. Similarly, we found the topics that authors used to identify their manuscripts for publication consideration in JBTC and TCQ were incomplete and often too specific. Many of the TCQ topics, for example, included the word “theory,” matching this journal’s editorial scope, but not ours. We devised our own list of 13 topics based on our analysis of the data corpus and discussed each article collaboratively. To establish mutual exclusivity, we only coded the primary topic for each article, which we often determined by identifying the dependent variable in the experimental study.

How Data Were Analyzed

The majority of our data were categorical, which limited the types of statistical analyses we could apply. In addition to analyzing the data with simple descriptive statistics, we used contingency table analysis and correspondence analysis. Both measures added greater depth to the results.

Contingency table analyses correlate multivariate frequency distributions, allowing researchers to statistically compare distributions of categorical, or non-numerical, data. We primarily ran binomials, a type of contingency table analysis that tests the statistical significance of deviations from theoretically expected distributions in two categories. Additional measures like Pearson’s chi-square then determined whether proportions in the table’s cells significantly differ. When a significant chi-square value is found, we conclude the variables being examined are contingent and, therefore, not independent. We also conducted follow-up pairwise tests to examine the nature of significant contingency tables (such as comparing individual cells in the table). Contingency table analyses are a widely used statistical tool for categorical data and were also used in Martin et al., who studied variables similar to the ones addressed in this study [26].
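As an illustration only (with invented counts, not the study’s data), the R sketch below runs the two kinds of tests described here: a binomial test of deviation from a theoretically expected split and a Pearson chi-square test on a small period-by-journal contingency table:

    # Binomial test: do 23 of 43 experiments deviate from an expected 50/50 split?
    binom.test(x = 23, n = 43, p = 0.5)

    # Hypothetical Period x Journal contingency table of experiment counts
    counts <- matrix(c(18,  5, 4,
                       10, 23, 8),
                     nrow = 2, byrow = TRUE,
                     dimnames = list(Period  = c("1992-1996", "2002-2006"),
                                     Journal = c("JTWC", "TPC", "Other")))
    test <- chisq.test(counts)   # Pearson's chi-square test of independence
    test$expected                # expected cell counts under independence
    test$p.value                 # a significant value suggests the variables are contingent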

Correspondence analysis (CA) is a geometric technique used to analyze two-way and multiway tables containing some measure of correspondence between the rows and columns [31], [32]. The approach produces results comparable to principal components analysis (PCA) or factor analysis but is designed for non-numeric data. CA is widely used in corpus linguistics, marketing, and ecological research, but to our knowledge, this is the first time it has been applied in technical communication. Due to its exploratory approach, CA is not a method used to test hypotheses. Instead, it reveals patterns in complex data and provides output that can help researchers interpret these patterns. The most useful component of CA is its ability to visually organize the data in the categories into central and peripheral instances. The increasing distance of any representative of either category from the origin corresponds to a higher degree of differentiation compared with the other members with respect to their co-occurrences with the data in the other category. These analyses were run in [33], and all CAs were run in R using the “ca” package [34]. We describe all of these measures with more depth in the results section.

RESULTS

This section organizes the results of the study by the three research questions.

RQ1. How has experimental research in technical communication journals developed over the 20-year period? Specifically, how much is being published, which journals publish experiments, what topics are being explored, and what fields are inspiring this research?

We identified a sample of 137 experiments published over 20 years within the five leading technical communication journals. The years 1996, 1999, 2004, and 2006 published the largest number of experiments. The fewest experiments appeared in 2003 (n = 3) and were all published in TPC. Overall, TPC published the most experiments (n = 61), which also accounted for 45% of the total sample. JTWC published the second most experiments (n = 32), followed by TC (n = 26), JBTC (n = 14), and TCQ (n = 4). Table II provides the frequencies of these experiments by the five journals and the 20-year period.


TABLE II. FREQUENCIES AND DISPERSIONS OF THE EXPERIMENTAL SAMPLE ACROSS 20 YEARS WITHIN THE FIVE JOURNALS

Journal and Year

Due to the size of the sample, we grouped the data into four five-year periods to reveal more meaningful trends. Nearly all of these periods produced the same amount of research (M = 34.25, SD = 3.20), with the most experiments appearing within 2002–2006 (n = 35). A contingency table analysis of the frequency of experiments for each five-year period found no statistical significance. This indicates that the output of experimental research in journals has remained consistent over the 20 years. The conclusion section addresses this finding in more depth.

To further identify associations between Journal and Year, we subjected the data to a CA. CA is not an inferential measure and, therefore, does not determine statistical significance. The statistical output provides a chi-square value, but this value relates to the overall interaction between the rows and columns; it is up to the researcher to consult other statistical output to properly interpret the results. Throughout this section, we report only CAs that had a significant chi-square value (p < 0.05), but we reviewed other output to determine between-variable relationships.

For illustrative purposes, we describe the output used to interpret the CA between Journal and Year (Fig. 1). In CA, interpretation is typically restricted to the first two dimensions. The eigenvalues for the first two dimensions of Fig. 1 are 84.7% and 14.1%, respectively, indicating that the visualization explains 98.8% of the variation (inertia). The table used to produce this analysis includes many cells. Appendix A provides the numerical summary.

Fig. 1. Correspondence analysis of Journal and Year.

To ensure a reasonable degree of accuracy in the analysis, the quality score of any data set (qlt) should be more than 500. A figure of 500 indicates that 50% of the inertia for that data point lies off the principal axes and, therefore, that point is less accurately displayed in the plot. (See [32] and [35] for more details.) A low quality score for any given row or column suggests that the interpretation of its position on the plot should be appropriately hedged. For example, the output visualized in Fig. 1 included a quality score of 996 (or 99.6%) for the journal JTWC. Likewise, the 1992–1996 year range had a quality score of 998 (or 99.8%). Both scores suggest a strong level of accuracy in their visual display. Similarly, the total inertia values (inr) relate to quality. JTWC has an inertia value of 479. So, given that the plot in Fig. 1 captures 98.8% of the inertia (such as distribution and variation of the data), JTWC then accounts for 48.5% of the structure of the plot (47.9/98.8 × 100). We consulted the quality and inertia scores for the other two CAs reported in this section.
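As a minimal sketch of how such output is produced with the “ca” package the authors cite (the journal-by-period counts below are hypothetical, not Table II), summary() reports each point’s quality (qlt) and inertia (inr) in permille, the same kind of diagnostics as the 996 and 479 values discussed above, and plot() draws the map that is interpreted in what follows:

    library(ca)

    # Hypothetical Journal x five-year-period table of experiment counts
    tab <- matrix(c(18,  6,  5,  3,
                     5, 13, 23, 20,
                     5,  4,  3,  2,
                     7,  6, 11,  2,
                     1,  1,  1,  1),
                  nrow = 5, byrow = TRUE,
                  dimnames = list(Journal = c("JTWC", "TPC", "JBTC", "TC", "TCQ"),
                                  Period  = c("1992-1996", "1997-2001",
                                              "2002-2006", "2007-2011")))

    fit <- ca(tab)     # simple correspondence analysis
    summary(fit)       # eigenvalues (% of inertia per dimension), plus qlt and inr per point
    plot(fit)          # symmetric map: proximity between points suggests association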

As written before, the strongest correlation in this first CA was between JTWC and the 1992–1996 experiments. The proximity of these two points on the top-right quadrant of Fig. 1, therefore, indicates a relationship between the variables. JTWC published 18 experiments in these years (Table II), which were the most experiments published in this timeframe. As a comparison, JBTC and TPC were the second most frequent publishers with five experiments each. The experiments published in JTWC during these five years also accounted for 56.25% of the total experiments that the journal published over 20 years.

Fig. 2. Correspondence analysis of Topic and Journal.

A second correlation was found between TPC and the experiments published in the last ten years of our timeframe (2002–2006 and 2007–2011). TPC published 43 experiments during these two five-year periods (23 and 20), which accounted for 70.49% of the total experiments this journal published. This correlation in relation to the JTWC/1992–1996 experiments is noteworthy. TPC and JTWC are plotted on opposite sides of Fig. 1. The distance may suggest an evolutionary change; TPC is strongly associated with recent experiments but weakly associated with early experiments, and vice-versa for JTWC. Similarly, the isolation of TCQ and JBTC in relation to the other journals and years indicates the strength of their associations with experimental research.

A third correlation was found between TC and the 2002–2006 experiments (top-left quadrant of Fig. 1). TC published the second highest number of experiments during this period (n = 11 compared to TPC’s 23). However, the three other journals published a combined total of five experiments. TPC and TC appeared to be the leading publishers of experiments during this time.

Topic

We also investigated which topics were being explored with experimental methods and the distribution of the 13 topics across the five journals.

A contingency table analysis determined how evenly distributed the experimental topics were across the sample. We found no literature that weighted the importance of individual topics to technical communication; therefore, the null hypothesis assumed that if all topics were evenly distributed, 10.5 experiments on each topic would have appeared over the 20-year period. This number was derived by dividing the total number of experiments in our sample by the number of topics (137/13). Table III summarizes these topics and their observed frequencies as well as how dispersed the topics were among the five journals.
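One standard way to run this comparison is a chi-square goodness-of-fit test against equal expected frequencies; the R sketch below is ours and uses invented topic counts that merely sum to the study’s 137 experiments, not the values in Table III:

    # Invented observed counts for 13 topics (sum = 137; see Table III for the real values)
    observed <- c(24, 23, 22, 14, 12, 11, 10, 6, 5, 4, 3, 2, 1)
    names(observed) <- paste0("topic", 1:13)

    sum(observed) / length(observed)        # expected count per topic: 137 / 13 = 10.5
    chisq.test(observed, p = rep(1/13, 13)) # goodness of fit against an even distribution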

The contingency table analysis revealed that experiments on comprehension, technology, and genre were published with a higher-than-expected frequency. These three topics comprised 50% of the total sample and were dispersed within four of the journals. To provide more meaningful results on this substantial portion of the data, we subcategorized the experiments on these three topics.

The 24 experiments on comprehension were classified as either visual or text based. Seventy-five percent of these experiments were text-based and covered a variety of ways that readers identified and retained information. For example, Spyridakis and Fukuoka examined American and Japanese readers’ comprehension of and preference for expository texts that contained a thesis that was organized either inductively or deductively [36]. Results indicated that Americans recalled information equally well with either organizational structure, but that Japanese readers recalled more information from deductively organized texts. Twenty-five percent of the comprehension experiments were visual based. As an example, Williams and Spyridakis examined how typographical and formatting tools of headings signaled the structure of text and, thus, the author’s perspective [37]. Among the results were that readers comprehended visual discriminations among headings with fewer rather than more dimensions, and that size was the most significant visual cue to a heading’s hierarchical position.

TABLE III. FREQUENCIES AND CONTINGENCY TABLE ANALYSIS RESULTS OF TOPIC AND JOURNAL

The 23 technology-themed experiments covered several content areas. A majority of these studies examined how technology informed learning, how technology impacted communication, or how the design of technology related to its use. For example, Amare found that technical writing students preferred PowerPoint-based lectures, but students who learned via a traditional lecture format performed significantly better in their coursework [38].

Finally, 77% (n = 17) of the genre experiments focused on procedures. One recent study was a quasi-experiment that explored how participants used written instructions prior to contact with the appliance or while carrying out the designated task as well as how different formats impacted how quickly readers completed tasks [39]. The results indicated that 90% of participants consulted the instructions at some point during their interaction with the appliance, and that participants who used text-and-picture instructions completed their tasks in the least amount of time. Other genres explored via experiments were correspondence, forms, reports, and resumes.

A contingency table analysis also indicated that experiments on collaboration, communication strategies, editing and style, and pedagogy were not significantly distributed because the individual frequencies were too close to the expected frequency of 10.5. Therefore, experiments on these topics appeared as expected if each topic was to have equal representation. Combined, these topics comprised 35% of the total sample and were dispersed within the journals with the most variation.

Experiments on assessment, visual design, knowledge management, intercultural communication, research design, and gender were published at a lower-than-expected frequency. Combined, these topics comprised 15% of the total sample and were dispersed within only one or two of the journals.

Topic and Journal

Our second CA revealed strong correlations between individual topics and journals (Fig. 2). Since the 2-D analysis accounted for only 70.0% of the inertia (Dim 1: 37.9%, Dim 2: 32.1%), we also considered the third dimension (Dim 3: 15.3%) to represent 85.3% of the inertia. In simple binary CA, additional dimensions should be included if the combination of the first two dimensions is not more than 75% [35]. Fig. 2 illustrates the first two dimensions; illustrations of Dims 1 and 3 and Dims 2 and 3 as well as the numerical summary are supplied in Appendix B.

The strongest correlation existed between pedagogy and JBTC (top-left quadrant of Fig. 2). The journal published only 10% of the experiments in this study’s sample, but it published 45% of the experiments on pedagogy. These experiments explored a variety of issues on the process and product of student writing. An early experiment found that inexperienced writers produced higher quality writing via contextualized case assignments than traditional assignments. However, the assignment type did not impact the writing produced by students with previous business-related experiences [40]. In a recent pilot study, engineering students used sentence combining and pattern practice to produce significantly higher quality reports than students in a control group [41].


Fig. 3. Correspondence analysis of Topic and Year.

Another correlation existed between collaboration and TPC (top-right quadrant of Fig. 2). TPC published approximately 93% of the experiments on this topic (n = 13), and 85% of these experiments focused exclusively on virtual teams (n = 11). Virtual teams consist of any dispersed group that uses technology to accomplish an organizational task. These experiments have addressed how various initial meeting modes and the technological complexity of a project impact virtual collaboration [42], [43].

A third correlation was found between intercultural communication and TC (bottom-left quadrant of Fig. 2). Overall, this topic appeared less than expected across the five journals (Table III), but TC published all three of the experiments. The most recent study determined how participants’ culture related to the presentation introduction they preferred [44].

Topic and Year

Our final CA showed relationships between individual topics and the five-year periods (Fig. 3). The 2-D analysis accounted for 86.7% of the inertia (Dim 1: 56.4%, Dim 2: 30.2%). Again, this indicates that the analysis is stable and we can interpret the plot with some confidence. The numerical output is supplied in Appendix C.

The strongest correlation related experiments on communication strategies to the years 1997 to 2001 (top-right quadrant of Fig. 3). A total of 12 experiments were published on this topic, 75% of which were during these five years. These experiments explored a variety of causal relationships, including the effects of document type and procedure on the quality and ease of translation for Spanish, Chinese, and Japanese speakers; the effects of speaking Ebonics and Standard English in professional settings; and the effects that exordial techniques have in gaining audience attention [45]–[47]. All five journals published an experiment on communication strategies during these five years.

TABLE IV. FREQUENCIES AND CONTINGENCY TABLE ANALYSIS RESULTS OF ORIGIN

Another correlation existed between experiments on collaboration and the 2002–2006 period (top-left quadrant of Fig. 3). A total of 14 experiments were published on this topic, 57% of which were published during these five years. As observed earlier, the bulk of these experiments (n = 6, or 75%) focused on collaboration within virtual teams. Only TPC published experiments on this topic from 2002 to 2006. Results from the second CA (Fig. 2) also demonstrated a correlation between TPC and experiments on collaboration.

Origin: Finally, we examined the primary field that the experimental researchers cited in their literature review to motivate their own study. Fields that inspired these experiments did not necessarily correlate with the fields associated with the study’s researchers. Results offer insight into the fields from which experimental research in technical communication derives or with which it is closely associated. Overall, we identified 11 different origins. Table IV lists these fields and their frequencies.

A contingency table analysis calculated the observed frequencies of the fields of origin across the sample (Table IV). We identified no literature that weighted the fields that inspired experimental research in technical communication, so our null hypothesis again assumed that the fields of origin would be evenly distributed across the sample. This expected frequency was determined by dividing the total number of experiments by the number of origins (137/11 = 12.45).

Results indicated that experiments inspired by business and technical communication and the STEM disciplines, particularly psychology, appeared more than expected. Combined, these experiments comprised 48.2% of the total sample and were dispersed across all five journals.

Experiments inspired by communication studies, human–computer interaction, education, and linguistics and language behavior appeared as expected and were not significantly distributed because the individual frequencies were too close to the expected frequency of 12.45. Combined, these experiments comprised 39.4% of the total sample and were dispersed across all five journals.

Finally, experiments inspired by writing studies, business and economics, information and knowledge management, gender studies, and medicine appeared less than expected. Combined, these experiments comprised 12.4% of the total sample and were dispersed within all five journals.

RQ2. What content characterizes experimental research in technical communication? Specifically, how explicit are the research questions and hypotheses? How are the results of pilot studies reported? What are the sample sizes and populations used? And what measures do researchers use?

Research Questions and Hypotheses: Experiments are designed to answer research questions or test hypotheses about relationships between variables. Overall, 80% (n = 110) of the experiments included explicit research hypotheses or questions while 20% (n = 27) implied these features. We also found that 48% of the experiments that included implicit questions and hypotheses were published from 1992 to 1997.

To further explore this finding, we investigated the correlation between Year and Question/Hypothesis. Results from a cross tabulation revealed that these variables were significantly associated (chi-square = 14.734, p = 0.002, Cramer’s V = 0.328). According to Rea and Parker, a value of 0.328 for Cramer’s V indicates a moderate association (any value over 0.4 is considered strong) [48]. A series of pairwise tests determined where the variables significantly differed. To control for possible type 1 error, we used Holm’s sequential Bonferroni method, which uses a more conservative alpha level for determining significance.

The results confirmed that experiments published from 1992 to 1996 included significantly more implicit research questions and hypotheses than experiments published from 2007 to 2011 (chi-square = 10.439, p = 0.001, Cramer’s V = 0.398). This variable association was strong. In addition, experiments published from 1992 to 1996 included significantly more implicit research questions and hypotheses than experiments published from 2002 to 2006 (chi-square = 8.414, p = 0.004, Cramer’s V = 0.342). This variable association was moderate.
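For readers who want to reproduce this kind of follow-up, the R sketch below (with a made-up period-by-hypothesis table and hypothetical pairwise p values, not the study’s data) computes the chi-square test, derives Cramer’s V from it, and applies Holm’s sequential Bonferroni correction:

    # Hypothetical counts of implicit vs. explicit questions/hypotheses by five-year period
    tab <- matrix(c(13,  7,  4,  3,
                    20, 27, 31, 32),
                  nrow = 2, byrow = TRUE,
                  dimnames = list(Hypothesis = c("implicit", "explicit"),
                                  Period = c("1992-1996", "1997-2001",
                                             "2002-2006", "2007-2011")))

    test <- chisq.test(tab)
    V <- sqrt(unname(test$statistic) / (sum(tab) * (min(dim(tab)) - 1)))  # Cramer's V
    c(chi_square = unname(test$statistic), p = test$p.value, cramers_V = V)

    # Holm's sequential Bonferroni correction applied to hypothetical pairwise p values
    pairwise_p <- c("92-96 vs 07-11" = 0.001, "92-96 vs 02-06" = 0.004,
                    "92-96 vs 97-01" = 0.060, "97-01 vs 02-06" = 0.210)
    p.adjust(pairwise_p, method = "holm")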

Pilot Study: The inclusion of a pilot study can alsomark the quality of an experiment. Pilot studiesare preliminary studies that can test experimentalprotocols and techniques to ensure they are aseffective as possible before the main study begins. Ifmeasurement error is reduced, the reliability of themeasurement technique is increased. Overall, 27%

37) of the experiments included the resultsfrom a pilot study. We found that 65% of the pilotstudies appeared in experiments published in thelast 10 years of our timeframe 24).

To further explore this finding, we again ran a crosstabulation on the Year and Pilot Study. Variableswere significantly associated (chi-square 8.262,

0.041, Cramer’s V 0.246), and pairwise testsrevealed the significant relationships. The resultsconfirmed that experiments published from 1992 to1996 included significantly less experiments withpilot studies than experiments published from 2007to 2011 (chi-square 8.250, 0.004, Cramer’s V0.354). This variable association was moderate.

In addition, experiments published from 1992 to 1996 included significantly fewer pilot studies than experiments published from 1997 to 2001 (chi-square = 4.986, p = 0.026, Cramer's V = 0.277). This variable association was moderate.

Sample Size and Population: A sample includes a subgroup of a population. Results from a sample are then generalized back to (and used to represent) the population. For such generalization to be valid, the sample must be representative of its population. Therefore, the most important characteristic of the sample is not its size but its similarity to its parent population. Most of the experiments published in technical communication used convenience samples (such as nonrandomized groups like a class of technical writing students), and many of the other studies did not provide enough sampling information to accurately code. We found that the sample sizes used varied from ten subjects to 3,540. Two studies did not define their sample size.



TABLE V. TOP STATISTICAL MEASURES REPORTED BY FREQUENCY AND BROAD CATEGORY

Overall, 20,604 subjects were used across the sample. The distribution of the sample sizes was greatly skewed, and the median value of 73 was the best indicator of central tendency. Sample size is contextual and depends on the statistical power, effect size, and significance level of each individual study. Therefore, we do not provide any additional description of this variable.
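To illustrate the point that sample size follows from power, effect size, and significance level rather than from a fixed rule of thumb, here is a minimal sketch of an a priori power calculation for a two-group comparison. The effect size, alpha, and power values are assumptions chosen only for the example; they are not drawn from the studies in the sample.

```python
# Sketch of an a priori sample-size calculation for an independent-samples t-test.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.5,   # assumed medium Cohen's d
    alpha=0.05,        # significance level
    power=0.80,        # desired statistical power
    ratio=1.0,         # equal group sizes
)
print(f"Subjects needed per group: {n_per_group:.1f}")  # roughly 64 per group
```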

All 137 experiments involved human subjects. Seventy-three percent of the experiments used students (n = 100), 10% (n = 14) used a sample we classified as other, 9% (n = 12) used practitioners, 5% (n = 7) used a mixed population, and 3% (n = 4) used an undefined population. Subjects classified as other included military civilians, senior citizens, and academics [49]–[53]. Mixed samples included combinations of populations, including recruiters and students and professional and student writers [54], [55].

Measures: We coded every statistical measure reported in the sample. Overall, researchers reported 661 total measures and 46 different types. Table V lists the 13 most frequent measures. The first four (mean, demographic information, standard deviation, and ANOVA) accounted for 53% of the total measures reported in the sample. The first three of these were broadly classified as descriptive; the ANOVA was broadly classified as basic inferential. Together with the remaining nine measures, these 13 accounted for 90% of the total measures reported in the sample. In total, the most frequent measures included six descriptive measures, six basic inferential measures, and one advanced inferential measure. The coding schema we used to broadly classify measures was modified from [24].

The majority of experiments reported basic inferential measures like the ANOVA, t-test, and correlations (n = 95, 69.3%). In addition, 16.8% of the studies (n = 23) reported advanced inferential measures such as the MANOVA. Finally, 13.1% of the studies (n = 18) reported only descriptive measures such as mean and standard deviation. One study reported no statistics. We performed a series of tests that correlated the broad measure categories with Journal, Year, Origin, and Topic; however, no significant relationships were found.
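The broad coding schema described above can be pictured as a simple lookup from each reported measure to a category, followed by a tally. The sketch below assumes a hypothetical category map and a hypothetical list of reported measures; it only illustrates the shape of the classification, not the authors' actual codebook.

```python
# Sketch of tallying reported statistical measures by broad category.
from collections import Counter

# Hypothetical mapping of measure names to broad categories (adapted loosely
# from the kind of schema described in the article, modified from [24]).
BROAD_CATEGORY = {
    "mean": "descriptive",
    "standard deviation": "descriptive",
    "demographic information": "descriptive",
    "ANOVA": "basic inferential",
    "t-test": "basic inferential",
    "correlation": "basic inferential",
    "chi-square": "basic inferential",
    "MANOVA": "advanced inferential",
}

# One entry per measure reported in an experiment (illustrative data only)
reported = ["mean", "standard deviation", "ANOVA", "t-test", "mean", "MANOVA"]

counts = Counter(BROAD_CATEGORY.get(m, "uncoded") for m in reported)
total = sum(counts.values())
for category, count in counts.most_common():
    print(f"{category}: {count} ({count / total:.0%})")
```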

RQ3. Who publishes experimental research in technical communication? Specifically, which authors and affiliates are most associated with experimental research? And how does the sample's gender and authorship distribution compare to existing research?

Author Results: Overall, our sample included 309 authorship attributions and 236 different authors. Approximately 84% of the authors contributed to only one experiment, and 11% of the authors contributed to two experiments. Table VI lists the 12 authors who conducted three or more experiments.

Spyridakis dominated the output of experimental research in technical communication over the 20-year period. She coauthored 11% (n = 15) of the total sample. Her output also demonstrated breadth; she published within all four of the five-year periods and in three of the leading journals (TPC, JTWC, and TC). The majority of these experiments were comprehension-themed (n = 9), but she also published on genre, technology, communication strategies, and editing and style.

Overall, we found 57% of the sample's authors were male (n = 176) and 43% were female (n = 133). Regarding collaboration, experiments included up to six authors, and approximately 74% of these papers were coauthored (n = 101) rather than single authored (n = 36).

To enhance these results, we also collected information on the top experimental researchers' educational background and their current affiliations. As shown in Table VI, these 12 researchers earned their highest degree from one of six institutions. Half of these institutions were US based; three researchers earned their degree at the University of Washington, while researchers from international institutions were evenly split among the University of Leiden, the University of Twente, and Utrecht University (n = 2 each). Only two researchers earned a degree in technical communication (Ummelen and Fukuoka), and the others earned their degree in a variety of other fields, including educational curriculum and instruction, management information systems, and sociolinguistics.



Currently, these experimental researchers are affiliated with one of nine different institutions. Seven of these institutions are based internationally; three researchers are affiliated with the University of Twente, but they all work in different departments. Only Steehouder (Twente) is currently affiliated with a pure technical communication department; however, Spyridakis and Williams (Washington) produced much of their experimental research as faculty in the Department of Technical Communication (now the Department of Human Centered Design and Engineering), and Gerritsen is affiliated with the Department of Business Communication Studies (Radboud).

Finally, we compared authorship among the five technical communication journals. A contingency table analysis suggested a moderate association between journal and collaboration (chi-square = 10.432, p = 0.034, Cramer's V = 0.276). Follow-up pairwise comparisons revealed that TPC published significantly more coauthored experiments than JTWC (p = 0.039, Cramer's V = 0.214). TPC also published significantly more coauthored experiments than TCQ (p = 0.007, Cramer's V = 0.333); however, this result must be heavily hedged due to the small sample of TCQ experiments.

Affiliation Results: Our sample also included 309 affiliation attributions and 108 different affiliations. Table VII lists the 13 institutions that produced five or more experiments.

The University of Washington (n = 38) and the University of Twente (n = 35) comprised nearly 25% of the affiliation attributions. Washington researchers produced 17 of the sample's experiments. Spyridakis contributed to 15 of these experiments; however, 18 other researchers from Washington were also represented. All Washington experiments had multiple authors, including researchers from the Boeing Company, Fuji Xerox, and the State University of New York at Binghamton. Washington researchers published experiments throughout all four of the five-year periods and in three leading journals (TPC, JTWC, and TC). These experiments explored six different topics, primarily comprehension.

Researchers at Twente produced 19 of the sample's experiments. Sixteen different authors contributed to these studies, including five of the top experimental producers identified in Table VI (van der Meij, de Jong, Steehouder, Gellevij, and Ummelen). Fifteen of the 19 experiments were coauthored, and the Twente researchers also collaborated with researchers from Utrecht University and the Baan Company (the Netherlands). Twente researchers published experiments throughout all four of the five-year periods in four different journals (TPC, JTWC, TC, and JBTC). These experiments explored six different topics, primarily genre.

In addition, 60% of the affiliations were from the US and were represented by 74 different universities, industries, or government and military agencies. Forty percent of the affiliations were international and represented by 34 different universities, industries, or government and military agencies. When classified by world region, 63% of the affiliations were from North America, including Canada (n = 193); 30% from Europe (n = 94); 4% from Asia (n = 13); 2% from Australia (n = 7); and 0.5% from both Africa and the Middle East as well as Central and South America (n = 1 each).

Finally, 92% of the affiliations came from a university (n = 285), 7% from industry, and 1% from government and military agencies.

CONCLUSIONS, LIMITATIONS, AND SUGGESTIONS FOR FUTURE RESEARCH

This final section examines how the study's results relate to the broader context of the field's growing body of knowledge. We also note the study's limitations and suggest areas for future research.

Conclusions: One of the more compelling results from this study was the consistency of the experimental output. The contingency table analysis confirmed no significant shifts in experimental publication over 20 years. This result could suggest that experimental approaches have a stable presence in the field's journals. However, the more discouraging conclusion is that no notable increase in output was found despite calls from leading scholars, tutorials on these methods tailored to the field, and the increased number of researchers with formalized degrees in technical communication. The results of this study revealed ways that the journals' editors and researchers can strategically improve the presence of experimental research in the field, as well as encouraging findings on the content of recent experiments.

TPC emerged as a strong producer of experimental research, publishing 45% of the sample.



TABLE VI. TOP EXPERIMENTAL RESEARCHERS ASSOCIATED WITH THE SAMPLE

TABLE VII. TOP AFFILIATIONS ASSOCIATED WITH THE EXPERIMENTAL SAMPLE

For illustrative purposes, if TPC were removed from inquiry, our comprehensive sample would drop from 137 experiments to 76. Instead of publishing approximately seven experiments per year, the average drops by almost half to 3.8. Experiments currently comprise 6.47% of the total refereed papers published across the five journals within the 20-year period (n = 2,118); that number would drop to 3.56% without TPC.

Interestingly, while experiments dominate current TPC content (alongside surveys and tutorials), the journal's readers prefer case studies, literature reviews, and tutorials [17]. This reader feedback merits further investigation into what makes technical communication less open to experimentation. Cumulatively, these findings could suggest a training issue, which aligns with the general criticisms of our field's research practices. Charney wondered if technical communicators had a deep enough understanding of methods to appropriately match them to their research questions [2, p. 80]. She later wrote that experimental approaches, in particular, appeared daunting to the field because of their unfamiliar techniques [56]. In the field's graduate-level research methods courses, Campbell found that experiments and statistics were often covered but that topics related to validity and reliability were some of the least covered [6].



She argued that these disconnects in methodological training impacted the value of technical communication research and moved the field further away from full professional status. Based on the present study's results, we conclude that experimental research in technical communication journals would suffer significantly if TPC shifted to the readers' preferred editorial focus.

Results from this study also indicated several additional contributions that TPC has made in experimental research: the journal correlated with experiments on the topic of collaboration (Fig. 2), particularly during 2002–2006 (Fig. 3), and was also found to be a more popular venue for coauthored experiments than JTWC and TCQ. While the impact of TPC is evident, this study also found that three other journals contributed to experimental research in meaningful ways.

JTWC was the leading publisher of experiments from 1992 to 1996. However, the journal appeared to change its editorial focus, publishing 17 experiments from 1992 to 1996 but just three experiments from 2007 to 2011. Fig. 1 illustrated this seemingly deliberate shift by JTWC over the 20-year period as well as TPC's evolution into the current top producer of experimental research. Though JTWC did not correlate with any other variables, it published experiments on 8 of the 13 topics and published 32 of the sample's experiments.

TC emerged as a consistent venue for experiments and was associated with experiments on intercultural communication (Fig. 2). Overall, TPC, JTWC, and TC were the three publication venues for the top producer of experiments (Spyridakis) and for researchers at the top two affiliations (the University of Washington and the University of Twente).

In comparison, JBTC and TCQ were weaker producers of experimental research. Combined, both produced 13% of the sample. Neither published many experiments over the 20-year period; however, JBTC was associated with experiments on pedagogy. Due to its small sample size (n = 4), TCQ did not correlate with any variables.

When identifying future publication venues, researchers should be aware of how these journals have evolved regarding experimental research. TPC appears to be the leader for current experimental research; however, researchers should also note that TPC, JBTC, and TC were associated with experiments on collaboration, pedagogy, and intercultural communication, respectively. These journals' editors could use this insight to enhance their editorial focus and better distinguish their scholarship.

Results on Topic and Origin could also shape future experimental research. Experiments on comprehension, technology, and genre appeared more than expected (Table III), and technical communicators were mainly inspired by ideas and theories posited in business and technical communication (Table IV). Knowledge from the STEM disciplines, mainly psychology, was the second significant informant of experiments. We noted the field's interdisciplinarity in the literature review. The variety of topics and disciplines that inform technical communication experiments also reflects this idea; however, researchers should be mindful of how these and future studies extend or challenge our body of knowledge. Both Charney and Rude stressed a need for greater agreement on research questions, which would help define our own identity [2], [4]. Results from this study suggest ways experimental research can respond to this call.

In addition to providing an overview of the quantity of experimental research in technical communication journals, we reported results that suggest the quality of the sample. Overall, we found that most experiments included explicit research hypotheses or questions, and the presence of these elements increased significantly over the 20-year period. Pilot studies appeared in only 27% of the sample (n = 37), but the inclusion of these studies also increased significantly over the 20 years. Both results indicate improved research designs and experimental training. Similarly, almost 70% of the sample went beyond reporting descriptive measures and included basic inferential measures like the ANOVA, t-test, and correlations. We found no significant relationship among the measures and other variables, suggesting that experimental researchers have remained consistent in how they report results. All of these findings are encouraging signs for the quality of future experimental research.

Finally, the results on the authors and affiliations associated with experimental research merit discussion. Our sample revealed that 57% of the experiments were authored by males and 43% by females. This 14% gender gap is small when compared to previous research. An analysis of 25 years of TPC content included a 22% gender gap that favored male authors, and an analysis of five years of JBTC content included a 24% gender gap that favored female authors [25], [28].



Similarly, the top producer of experimental research in technical communication was a female, who published across the entire 20-year period and on five different topics.

We also found that two or more authors contributed to 74% of our experimental sample. When considered alongside the available data, it appears that collaboration could be associated with experimental research in technical communication. The earlier-noted TPC and JBTC analyses found that only 37% and 22% of scholarship, respectively, included more than one author. Perhaps the one discouraging result on authorship was that 96% of researchers contributed to only one (85%) or two (11%) experiments. Identifying why these authors chose not to conduct second or subsequent experiments, or at least not publish them in technical communication journals, might improve the output of experimentation. Finally, researchers affiliated with the University of Washington and the University of Twente produced a substantial amount of the sample. The frequency of experimental research produced by these affiliates could correlate to programmatic makeup.

We suggested earlier that the lack of experimentation in technical communication could correlate to training issues; however, barriers to using these methods might also relate to departmental support. Respondents to Blakeslee's questionnaire frequently mentioned their frustrations with acquiring support and recognition from their host departments. One respondent identified her department as having "a very traditional literature-based view of English studies" and wrote:

research that takes a lot of time is not well understood or highly regarded … I feel like I have to walk a very careful line between the more quantitative view of technical writing research and the literature-based world view. [12, p. 137]

Several other respondents expressed concerns about the value their English colleagues attached to technical communication research. Other studies (notably [57]) have questioned if English is a suitable home for technical communication programs. Experiments are arguably more associated with the hard and social sciences, and it is possible that a program's host department dictates the type of research its affiliates produce.

Both Washington and Twente appear more aligned with the science model than technical communication programs housed in literature-based English departments. The graduate programs at both universities are designed to produce high-quality researchers [58], [59]. Graduate students have the opportunity to work in several research labs, which likely facilitates the collaborative nature reflected throughout this study's sample; every experiment affiliated with Washington and most of the experiments affiliated with Twente had multiple authors. We also observed that other technical communication programs that were no longer housed in an English department, such as the Department of Linguistics and Technical Communication at the University of North Texas and the Department of Journalism and Technical Communication at Colorado State University, were more associated with recent experiments (such as [60] and [61]).

Limitations: This study explored only experimental research in technical communication journals over a 20-year period. The results cannot be generalized to the quantity or quality of research conducted with other empirical methods, such as surveys, case studies, or usability studies. We were able to perform a thorough analysis of our sample because of its size; however, the size also reduced the certainty of our findings, especially when some of these variables were organized into multiple levels. Finally, the majority of the variables we coded were categorical, reducing the strength of our statistical analyses. A limitation of contingency table analysis is that its results are often based on hypothesized outcomes when little previous research exists. And though correspondence analysis offers interesting results, the correlations cannot be considered statistically significant.

Suggestions for Future Research: We found that many of the general characteristics that make technical communication unique were also reflected in the experimental sample. However, we join in the recommendations that the field could only benefit from a series of universal questions that would provide the overlap necessary to extend or challenge previous findings. Experimental methods are appropriate means for facilitating this type of inquiry.

Future experiments might benefit from content-focused inventories on the specific topics or academic disciplines that inform our research. The experiments on genre, for example, are primarily focused on instructions and how readers find and use information.



A meta-analysis of these findings would yield additional designs that extend this body of knowledge. Additional studies could test these same phenomena in other settings with different populations, or perhaps apply these earlier designs to other genres, such as proposals, grants, or reports. Similarly, researchers should be aware of the experimental topics that appeared less than expected, such as research design and gender. Revisiting this literature might suggest additional studies or determine if an experiment is even an appropriate method for exploring the causal relationships impacting these topics.

Researchers could take a similar approach when choosing the idea or theory that best informs their experiments. As a field, we might need to consider if we are pulling from so many other disciplines that the impact of our research is being diluted or rendered indistinguishable. We might also need to consider which fields we should rely on most to inform these studies. Charney cautioned against separating too much from rhetoric and composition because its foundations helped shape many technical communicators [2]. Indeed, comprehension- and genre-focused experiments were likely published more than expected (Table III) because of their substantial and rich histories in the composition community that predate technical communication.

Charney also noted barriers to initially conducting experiments, and Blakeslee noted overall barriers to research, but future investigations should explore why researchers choose not to continue with these methods. Experimental research incurs costs that might not be associated with other approaches, such as paying human subjects and data coders. The STC has dissolved its $10,000 research grant, leaving only one external funding stream through the Council for Programs in Technical and Scientific Communication (up to $1,500). Technical communication is not yet a recognized discipline with federal funding agencies like the National Science Foundation, and the lack of external funding opportunities in our field might deter a greater volume of experimental research. However, the affiliates most associated with experimental output suggest an alternate path to producing more experiments. Additional studies on how the evolution of technical communication programs impacts the type of research produced are needed. Likewise, a follow-up study in another 10 or 20 years might reveal a correlation between technical communication programs that have separated from English departments and the types of research produced. Finally, this study only provided a snapshot of the quality of these 137 experiments. An additional study could assess the impact of these experiments, including how often and in which fields they are being cited.

Nearly all academic disciplines go through a maturation process. As technical communication continues its own, every facet of scholarship is subject to evolve and become refined, including its core philosophies, theories, research questions, and methodologies. All of these characteristics ultimately shape the quality of research produced. While solely focused on experimental research, we have offered a reliable design that technical communicators could use to explore other research methods. The collection of these data in tandem with the results of the present study will substantially inform the field's body of knowledge as well as outline directions for future experimentation.

Numerical summaries of the Correspondence Analysis on Journal and Year, the Correspondence Analysis on Topic and Journal, and the Correspondence Analysis on Topic and Year are shown in Appendices A, B, and C, provided as supplementary downloadable material at http://ieeexplore.ieee.org.
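For readers who want to reproduce the kind of output summarized in the appendices (principal inertias and row/column profiles), the following is a minimal sketch of simple correspondence analysis on a Journal-by-Year contingency table. The study itself used the ca package in R [34]; this numpy version follows the standard SVD formulation, and the cell counts are illustrative values consistent with the totals reported in the text (for example, 137 experiments overall and 32 from JTWC), not the authors' actual cross tabulation.

```python
# Sketch of simple correspondence analysis via SVD of standardized residuals.
import numpy as np

# Hypothetical 5 journals x 4 five-year periods
N = np.array([
    [10, 15, 18, 18],   # TPC
    [17,  7,  5,  3],   # JTWC
    [ 8,  9,  8,  6],   # TC
    [ 3,  3,  2,  1],   # JBTC
    [ 1,  1,  1,  1],   # TCQ
], dtype=float)

P = N / N.sum()                      # correspondence matrix
r = P.sum(axis=1)                    # row masses
c = P.sum(axis=0)                    # column masses
Dr_inv_sqrt = np.diag(1 / np.sqrt(r))
Dc_inv_sqrt = np.diag(1 / np.sqrt(c))

# Standardized residuals and their singular value decomposition
S = Dr_inv_sqrt @ (P - np.outer(r, c)) @ Dc_inv_sqrt
U, sing, Vt = np.linalg.svd(S, full_matrices=False)

inertias = sing ** 2                 # principal inertias (eigenvalues)
explained = inertias / inertias.sum()
print("Principal inertias:", np.round(inertias, 4))
print("Proportion explained:", np.round(explained, 3))

# Principal coordinates used to plot the row (journal) and column (year) points
row_coords = Dr_inv_sqrt @ U * sing
col_coords = Dc_inv_sqrt @ Vt.T * sing
```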

APPENDIX A

NUMERICAL SUMMARY OF CORRESPONDENCE ANALYSIS ON Journal AND Year

TABLE VIII. PRINCIPAL INERTIAS (EIGENVALUES)



TABLE IX. ROW PROFILES

TABLE X. COLUMN PROFILES

APPENDIX B

NUMERICAL SUMMARY OF CORRESPONDENCE ANALYSIS ON Topic AND Journal

TABLE XI. PRINCIPAL INERTIAS (EIGENVALUES)

TABLE XII. ROW PROFILES

TABLE XIII. COLUMN PROFILES



Fig. 4. Dimensions 1 and 3 (left panel) and Dimensions 2 and 3 (right panel) of the meaningful correlations in the Topic and Journal Correspondence Analysis.

APPENDIX C

NUMERICAL SUMMARY OF CORRESPONDENCE ANALYSIS ON Topic AND Year

TABLE XIV. PRINCIPAL INERTIAS (EIGENVALUES)

TABLE XV. ROW PROFILES

TABLE XVI. COLUMN PROFILES



ACKNOWLEDGMENTS

Both authors contributed equally to this paper.

REFERENCES
[1] M. S. MacNealy, "Toward better case study research," IEEE Trans. Prof. Commun., vol. 40, no. 3, pp. 192–196, Sep. 1997.
[2] A. M. Blakeslee and R. Spilka, "The state of research in technical communication," Tech. Commun. Quart., vol. 13, no. 1, pp. 73–92, 2004.
[3] R. K. Boettger and L. A. Palmer, "Quantitative content analysis: Its use in technical and professional communication," IEEE Trans. Prof. Commun., vol. 53, no. 4, pp. 346–357, Dec. 2010.
[4] C. D. Rude, "Mapping the research questions in technical communication," J. Bus. Tech. Commun., vol. 23, no. 2, pp. 174–215, 2009.
[5] D. Charney, "Empiricism is not a four-letter word," Coll. Comp. Commun., vol. 47, no. 4, pp. 567–593, 1996.
[6] K. S. Campbell, "Research methods course work for students specializing in business and technical communication," J. Bus. Tech. Commun., vol. 14, no. 2, pp. 223–241, 2000.
[7] D. Charney, "Experimental and quasi-experimental research," in Research in Technical Communication, L. J. Gurak and M. M. Lay, Eds. Westport, CT, USA: Praeger, 2002, pp. 111–130.
[8] J. M. Lauer and J. W. Asher, Composition Research: Empirical Designs. New York, NY, USA: Oxford Univ. Press, 1988.
[9] D. T. Campbell and J. C. Stanley, Experimental and Quasi-Experimental Designs for Research. Boston, MA, USA: Houghton Mifflin, 1963.
[10] M. S. MacNealy, "Research in technical communication: A view of the past and a challenge for the future," Tech. Commun., vol. 39, no. 4, pp. 533–551, 1992.
[11] K. T. Rainey, "Doctoral research in technical, scientific, and business communication, 1989–1998," Tech. Commun., vol. 46, no. 4, pp. 501–531, 1999.
[12] A. M. Blakeslee, "The technical communication research landscape," J. Bus. Tech. Commun., vol. 23, no. 2, pp. 129–173, 2009.
[13] J. H. Spyridakis, "Conducting research in technical communication: The application of true experimental designs," Tech. Commun., vol. 39, no. 4, pp. 607–624, 1992.
[14] A. Eaton, P. E. Brewer, T. C. Portewig, and C. R. Davidson, "Examining editing in the workplace from the author's point of view: Results of an online survey," Tech. Commun., vol. 54, no. 2, pp. 111–139, 2008.
[15] R. Haswell, "NCTE/CCCC's recent war on scholarship," Written Commun., vol. 22, no. 2, pp. 198–223, 2005.
[16] K. T. Rainey and R. S. Kelly, "Doctoral research in technical communication, 1965–1990," Tech. Commun., vol. 39, no. 4, pp. 552–570, 1992.
[17] S. Carliner, N. Coppola, H. Grady, and G. F. Hayhoe, "What does the Transactions publish? What do Transactions' readers want to read?," IEEE Trans. Prof. Commun., vol. 54, no. 4, pp. 341–359, Dec. 2011.
[18] A. Eaton, "Conducting research in technical editing," in New Perspectives on Technical Editing, A. J. Murphy, Ed. Amityville, NY, USA: Baywood, 2010, pp. 7–28.
[19] N. Coppola. (2010). Call for proposals for a special issue of Technical Communication on "Achieving professional status for our field." [Online]. Available: http://techcomm.stc.org/call-for-proposals/
[20] S. Carliner, "The three approaches to professionalization in technical communication," Tech. Commun., vol. 59, no. 1, pp. 49–65, 2012.
[21] N. W. Coppola, "The technical communication body of knowledge initiative: An academic-practitioner partnership," Tech. Commun., vol. 57, no. 1, pp. 11–25, 2010.
[22] Assessing and Using Research Methods, 2013. [Online]. Available: http://stcbok.editme.com/AssessingUsingResearchMethods
[23] M. M. Juzwik, S. Curcic, K. Wolbers, K. Moxley, L. Dimling, and R. Shankland, "Writing into the 21st century: An overview of research on writing, 1999 to 2004," Written Commun., vol. 23, no. 4, pp. 451–476, 2006.
[24] T. N. Tansey, B. N. Phillips, and S. A. Zanskas, "Doctoral dissertation research in rehabilitation counseling: 2008–2010," Rehab. Counsel. Bull., vol. 55, no. 4, pp. 232–252, 2012.
[25] C. Brammer and R. Galloway, "IEEE Transactions on Professional Communication: Looking to the past to discover the present," IEEE Trans. Prof. Commun., vol. 50, no. 4, pp. 275–279, Dec. 2007.
[26] J. St. Clair Martin, B. D. Davis, and R. H. Krapels, "A comparison of the top six journals selected as top journals for publication by business communication educators," J. Bus. Commun., vol. 49, no. 1, pp. 3–20, 2012.
[27] D. Davy and C. Valecillos, "Qualitative research in technical communication: A review of articles published from 2003 to 2007," in Qualit. Res. Tech. Commun., J. Conklin and G. F. Hayhoe, Eds. New York, NY, USA: Routledge, 2011.
[28] R. E. Burnett, "A farewell," J. Bus. Tech. Commun., vol. 17, no. 1, pp. 3–8, 2003.
[29] E. O. Smith, "Strength in the technical communication journals and diversity in the serials cited," J. Bus. Tech. Commun., vol. 14, no. 4, pp. 131–184, 2000.
[30] P. B. Lowry, S. L. Humpherys, J. Malwitz, and J. Nix, "A scientometric study of the perceived quality of business and technical communication journals," IEEE Trans. Prof. Commun., vol. 50, no. 4, pp. 352–378, Dec. 2007.
[31] M. Greenacre, Correspondence Analysis in Practice. Boca Raton, FL, USA: Chapman & Hall, 2007.
[32] M. Greenacre, Theory and Application of Correspondence Analysis. London, UK: Academic Press, 1984.



[33] R Core Team, R: A Language and Environment for Statistical Computing. Vienna, Austria: R Foundation for Stat. Comput., 2012.
[34] M. Greenacre and O. Nenadic. Simple, multiple and joint correspondence analysis (R package). [Online]. Available: http://ftp.ussg.iu.edu/CRAN/
[35] D. Glynn, "Correspondence analysis: Exploring data and identifying patterns," in Polysemy and Synonymy: Corpus Methods and Applications in Cognitive Linguistics, D. Glynn and J. Robinson, Eds. Amsterdam, The Netherlands: John Benjamins, to be published.
[36] J. H. Spyridakis and W. Fukuoka, "The effect of inductively versus deductively organized text on American and Japanese readers," IEEE Trans. Prof. Commun., vol. 45, no. 2, pp. 99–114, Jun. 2002.
[37] T. R. Williams and J. H. Spyridakis, "Visual discriminability of headings in text," IEEE Trans. Prof. Commun., vol. 35, no. 2, pp. 64–70, Jun. 1992.
[38] N. Amare, "To slideware or not to slideware: Students' experience with PowerPoint vs. lecture," J. Tech. Writing Commun., vol. 36, no. 3, pp. 297–309, 2006.
[39] F. Ganier, "Observational data on practical experience and conditions of use of written instructions," J. Tech. Writing Commun., vol. 39, no. 4, pp. 401–415, 2009.
[40] L. P. Rozumalski and M. F. Graves, "Effects of case and traditional writing assignments on writing products and processes," J. Bus. Tech. Commun., vol. 9, no. 1, pp. 77–102, 1995.
[41] J. Wolfe, C. Britt, and K. P. Alexander, "Teaching the IMRaD genre: Sentence combining and pattern practice revisited," J. Bus. Tech. Commun., vol. 25, no. 2, pp. 119–158, 2011.
[42] T. L. Roberts, P. H. Cheney, and P. D. Sweeney, "Project characteristics and group communication: An investigation," IEEE Trans. Prof. Commun., vol. 45, no. 2, pp. 84–98, Jun. 2002.
[43] H.-J. Han, S. R. Hiltz, J. Fjermestad, and Y. Wang, "Does medium matter? A comparison of initial meeting modes for virtual teams," IEEE Trans. Prof. Commun., vol. 54, no. 4, pp. 376–391, Dec. 2011.
[44] M. Gerritsen and E. Wannet, "Cultural differences in the appreciation of introductions of presentations," Tech. Commun., vol. 52, no. 2, pp. 194–208, 2005.
[45] J. H. Spyridakis, H. Holmback, and S. K. Shubert, "Measuring the translatability of simplified English in procedural documents," IEEE Trans. Prof. Commun., vol. 40, no. 1, pp. 4–12, Mar. 1997.
[46] B. A. Andeweg, J. C. de Jong, and H. Hoeken, "'May I have your attention?': Exordial techniques in informative oral presentations," Tech. Commun. Quart., vol. 7, no. 3, pp. 271–284, 1998.
[47] K. Payne, J. Downing, and J. C. Fleming, "Speaking Ebonics in a professional context: The role of ethos/source credibility and perceived sociability of the speaker," J. Tech. Writing Commun., vol. 30, no. 4, pp. 367–383, 2000.
[48] L. M. Rea and R. A. Parker, Designing & Conducting Survey Research: A Comprehensive Guide, 3rd ed. San Francisco, CA, USA: Jossey-Bass, 2005.
[49] H. G. Rogers and F. W. Brown, "The impact of writing style on compliance with instructions," J. Tech. Writing Commun., vol. 23, no. 1, pp. 53–71, 1993.
[50] K. B. Riggle, "Using the active and passive voice appropriately in on-the-job writing," J. Tech. Writing Commun., vol. 28, no. 1, pp. 85–117, 1998.
[51] N. Loorbach, J. Karreman, and M. Steehouder, "Adding motivational elements to an instruction manual for seniors: Effects on usability and motivation," Tech. Commun., vol. 54, no. 3, pp. 343–358, 2007.
[52] J. Hartley, "Obtaining reprints: The effects of self-addressed return labels," J. Tech. Writing Commun., vol. 32, no. 1, pp. 67–73, 2002.
[53] F. M. V. Horen, C. Jansen, A. Maes, and L. G. M. Noordman, "Manuals for the elderly: Which information cannot be missed?," J. Tech. Writing Commun., vol. 31, no. 4, pp. 415–431, 2001.
[54] D. Charney, J. Rayman, and L. Ferreira-Buckley, "How writing quality influences readers' judgment of resumes in business and engineering," J. Bus. Tech. Commun., vol. 6, no. 1, pp. 38–74, 1992.
[55] T. L. Crandell, N. A. Kleid, and C. Soderston, "Empirical evaluation of concept mapping: A job performance aid for writers," Tech. Commun., vol. 43, no. 2, pp. 157–163, 1996.
[56] D. Charney, "Guest editor's introduction: Prospects for research in technical and scientific communication, Part 2," J. Bus. Tech. Commun., vol. 15, no. 4, pp. 409–412, 2001.
[57] M. S. MacNealy and L. B. Heaton, "Can this marriage be saved: Is an English department a good home for technical communication?," J. Tech. Writing Commun., vol. 29, no. 1, pp. 41–64, 1999.
[58] J. Ramey. (2010). Department History. [Online]. Available: http://www.hcde.washington.edu/history
[59] Instructional Technology. [Online]. Available: http://www.utwente.nl/gw/ist/en/
[60] C. Lam, "Linguistic politeness in student-team emails: Its impact on trust between leaders and members," IEEE Trans. Prof. Commun., vol. 54, no. 4, pp. 360–375, Dec. 2011.
[61] S. C. Hayne, C. A. P. Smith, and L. R. Vijayasarathy, "The use of pattern-communication tools and team pattern recognition," IEEE Trans. Prof. Commun., vol. 48, no. 4, pp. 377–390, Dec. 2005.

Ryan K. Boettger (M'13) is an assistant professor in the Department of Linguistics and Technical Communication, University of North Texas, Denton, TX, USA. His research areas include curriculum development and assessment, STEM education, technical editing, and grant writing. He is the co-creator of TechCorp, a soon-to-be publicly released corpus of student technical writing.

Chris Lam is an assistant professor in the Department of Linguistics and Technical Communication, University of North Texas, Denton, TX, USA. He has taught courses in web design, technical communication, technical editing, and technical manuals and procedures. His research interests include computer-mediated communication in collaborative environments, linguistic politeness in technical communication, and quantitative research methods in technical communication research.