

Contributions of Student Questioning and Prior Knowledge to Construction of Knowledge From Reading Information Text

Ana Taboada and John T. Guthrie
Department of Human Development, University of Maryland

This study investigated the relationship of student-generated questions and prior knowledge with reading comprehension. A questioning hierarchy was developed to describe the extent to which student-generated questions seek different levels of conceptual understanding. Third- and fourth-grade students (N = 360) posed questions that were related to their prior knowledge and reading comprehension, measured as conceptual knowledge built from text. The results indicated that student questioning accounted for a significant amount of variance in students' reading comprehension, after accounting for the contribution of prior knowledge. Furthermore, low- and high-level questions were differentially associated with low and high levels of conceptual knowledge gained from text, showing a clear alignment between questioning levels and reading comprehension levels.

An active learner has been described as inquisitive and curious: someone who asks a substantial number of questions (Graesser, McMahen, & Johnson, 1994). Students who compose and answer their own questions are perceived as playing an active, initiating role in the learning process (Collins, Brown, & Newman, 1990; King, 1994; Palincsar & Brown, 1984; Singer, 1978). They seek information that is related to an existing knowledge structure (Olson, Duffy, & Mack, 1985). Student questioning, defined as self-generated requests for information within a topic or domain, relies on assessing what is known and what is unknown about a topic and attempting to expand existing knowledge of the topic (Taboada & Guthrie, 2004).

JOURNAL OF LITERACY RESEARCH, 38(1), 1-35. Copyright 2006, Lawrence Erlbaum Associates, Inc.

Correspondence should be addressed to Ana Taboada, University of Maryland, 3304 Benjamin Building, College Park, MD 20742. E-mail: [email protected]


In reading, student questioning is represented as a strategy that helps foster active comprehension (e.g., National Reading Panel, 2000; Singer, 1978). The significance of student questioning during reading was underscored in a call for the improvement of comprehension tests: "We might wish for more extended passages, more complex interpretive questions, and certainly, opportunities for students to formulate questions about what they read instead of just selecting answers to a test-maker's questions" (Resnick & Klopfer, 1989, pp. 208-209).

    Student Questioning in Relation to Text

Instruction in generating questions in relation to both expository and narrative texts has been shown to positively influence reading comprehension for elementary school, middle school, high school, and college students (Ezell, Kohler, Jarzynka, & Strain, 1992; King & Rosenshine, 1993; Nolte & Singer, 1985; Raphael & Pearson, 1985; Scardamalia & Bereiter, 1992; Singer & Donlan, 1982; Taylor & Frye, 1992). The instructional effect has been evident in students' accuracy in answering test questions, better free recall of text, and identification of main ideas (Rosenshine, Meister, & Chapman, 1996). However, a limitation of many of these studies is that the authors have not attempted to provide evidence that the processes of question asking were the source of improvement in comprehension, nor has a theoretical explanation for the effects of questioning instruction been provided. For example, it is possible that instruction on questioning increased students' activation of their background knowledge and that such activation accounted for the positive effects of the instruction. In other words, the attribution of the instructional effects to questioning has not been shown empirically, and a theoretical explanation of the benefits of questioning instruction has not been formulated in detail.

The evidence for questioning instruction in relation to narrative texts is extensive in terms of the types of questions students ask and the impact these questions have on different comprehension measures. For instance, third graders who learned to ask literal questions in relation to short stories showed significant gains in answering and generating questions in criterion and standardized reading comprehension tests as compared to students who did not learn to generate story-based questions (Cohen, 1983). Older students, who learned to ask story-specific questions by using elements of story structure (e.g., Who is the leading character?), also scored significantly higher on tests assessing knowledge of story structure as compared to students who answered teacher-posed questions (Nolte & Singer, 1985; Singer & Donlan, 1982). Furthermore, third-grade students have learned to formulate their own questions by distinguishing between the text to which the question referred and the knowledge base of the reader (Ezell et al., 1992). These students showed gains of 2.2 years (grade-equivalent score) on the California Achievement Test when compared to third graders who did not receive questioning instruction (Ezell et al., 1992). However, these results may be confounded by the fact that students who received questioning instruction had also been exposed to a rich, narrative reading curriculum with a large number of supplemental stories and were compared to students who did not have the same curriculum.

A meta-analysis of instructional studies (Rosenshine et al., 1996) revealed that the impact of questioning instruction yielded larger effect sizes for experimenter-based comprehension tests (effect size [ES] = .87) than for standardized tests (ES = .36). These effects were observed when students asked specific questions using, mainly, three types of question prompts: (a) signal words (e.g., who, where, how, why), (b) generic question stems (e.g., How are X and Y alike? How is X related to Y?) for expository texts, and (c) story grammar categories (e.g., a main character's goals) for narrative texts.

Despite the evidence that instruction in questioning in relation to narrative texts has a positive impact on the comprehension of those texts, the literature has not fully addressed that impact from a theoretical viewpoint. A similar scenario occurs in the case of questioning in relation to expository texts. For example, third-grade students who asked two literal-text types of questions, definition of terms and clarification questions (MacGregor, 1988), did not differ in vocabulary and reading comprehension from students who asked mainly one of the two question types. It is possible that, to have an impact on reading comprehension, students need to learn to ask questions that go beyond the literal level of term definitions and require integration of information between the text and the reader's prior knowledge.

In fact, when sixth graders learned to differentiate between literal and inferential questions in relation to expository passages, they were better at answering and asking questions than students who engaged only in question practice or who did not ask any questions (Davey & McBride, 1986). Similarly, sixth graders who were taught to formulate questions on the main ideas of expository paragraphs (Dreher & Gambrell, 1985) performed better in answering main idea questions for new paragraphs than students who interacted with text through different activities.

In summary, studies have indicated that a wide age range of students can learn to generate questions in relation to text (Cohen, 1983; Dreher & Gambrell, 1985; Ezell et al., 1992; Nolte & Singer, 1985; Palincsar & Brown, 1984; Rosenshine et al., 1996; Singer & Donlan, 1982) and that this questioning instruction fosters reading comprehension on both experimenter-designed and standardized tests (e.g., National Reading Panel, 2000; Rosenshine et al., 1996). Occasionally, researchers have discussed possible explanations for the impact of question generation on reading comprehension. For example, with regard to expository texts, it has been assumed that higher order inferential questions induce more thorough processing of text and enhance attention to the macrostructure of text (Davey & McBride, 1986), whereas for narrative texts, story-based questions were believed to aid in the organization of story events (Singer & Donlan, 1982). However, evidence has not been presented to address these possibilities.

Despite the evidence that students who ask questions improve their understanding or their reading comprehension of a topic, researchers have not attempted to account for why instruction in questioning improves their reading comprehension of a text. Theoretical explanations for the impact of questioning instruction on students' reading comprehension have been scarce, but at least three possibilities exist, and we discuss them next.

Influence of Questioning on Reading Comprehension Processes

Among the factors that can explain the relationship between questioning and reading comprehension, three have been discussed in previous literature: (a) active text processing, (b) knowledge use, and (c) attentional focus. According to some authors (e.g., Davey & McBride, 1986; Singer & Donlan, 1982), it is possible that the generation of questions improves reading comprehension as a result of active text processing (Wittrock, 1981). When asking questions, students are involved in multiple processes requiring deeper interaction with text. During questioning, students ponder relationships among different aspects of the text. They hypothesize, focus on details and main ideas, use attention selectively on different text sections (van den Broek, Tzeng, Risden, Trabasso, & Basche, 2001), and possibly anticipate conclusions about information in the text. Questions may contribute to reading comprehension mostly because they initiate these cognitive processes.

A second explanation for the association between questions and reading comprehension is the influence of prior knowledge on students' questions. In particular, prior knowledge may play a very specific role in the types of questions a student asks. College students with little prior knowledge in a knowledge domain do not ask many questions on materials that are too difficult or that exceed the extent of their knowledge base in the domain. Experts, however, tend to ask more questions on difficult materials than they do for easier, less conceptual materials in that domain (Miyake & Norman, 1979). These data support the notion that some type of relationship exists between the extent of the questioner's prior knowledge and the number of questions asked. A plausible explanation for this relationship is that questions activate prior knowledge, which, in turn, aids in reading comprehension.

A third possibility is that the impact of questioning on reading comprehension is explained by attentional factors. By asking questions related to a specific topic, the questioner directs his or her attention to text sections that contain information necessary to provide appropriate answers. This process has been termed the selective attention hypothesis, where "questions lead to a focusing of attention on text segments containing information from the category that the questions are about" (Reynolds & Anderson, 1982, p. 624). College students retained more knowledge from text information that was relevant to questions than they retained from text information irrelevant to questions. This evidence supports the notion that readers selectively allocate more attention to question-relevant information and learn this information better (Reynolds & Anderson, 1982). Van den Broek et al. (2001) described a "specific attention perspective" (p. 522) in relation to narrative texts. Under this perspective, readers' comprehension and memory would improve only for the story sections that were targeted by the questions asked. A "general attention perspective" (p. 522), for which questioning results in improved comprehension of the whole text, was also proposed. Under the general attentional focus, readers are motivated to give thorough answers that require integration of information across the story; thus, they focus on understanding the text as a whole (van den Broek et al., 2001). All three explanations are feasible reasons for the association between questioning and reading comprehension. However, few of these reasons have been empirically investigated in past research.

    Questioning and the Conceptual Level Hypothesis

We propose a fourth plausible explanation for the contribution of questioning to reading comprehension: that the conceptual levels of questions enable students to build knowledge structures from text. When the text is expository or informational, reading comprehension can be characterized by the conceptual knowledge constructed from text (Alao & Guthrie, 1999; Guthrie & Scafiddi, 2004). Conceptual knowledge consists of content information that can be structurally organized within a knowledge domain or a particular topic in that domain. Central to this structural organization are the interrelationships among the main concepts in the knowledge domain (e.g., Alao & Guthrie, 1999; Champagne, Klopfer, Desena, & Squires, 1981; Chi, de Leeuw, & Chiu, 1994; Guthrie & Scafiddi, 2004). Student questions, then, may support expository text comprehension to the extent that they support building a conceptual knowledge structure that includes the main concepts and essential relationships among the concepts in the text (Taboada & Guthrie, 2004).

Most theories of comprehension view successful understanding of a text as the identification of the elements in the text and the relationships among those elements to form a coherent structure, a mental representation of the text (e.g., Graesser & Clark, 1985; Kintsch, 1998; Trabasso, Secco, & van den Broek, 1984; van den Broek & Kremer, 2000). Students' questions may enhance reading comprehension by creating a preliminary structure for the different elements and relationships of a text representation. Questions may benefit comprehension of narrative texts to the extent that they support the text representation of a causal network (van den Broek et al., 2001). Similarly, questions may increase expository reading comprehension to the extent that they support the conceptual knowledge structure of the text (Taboada & Guthrie, 2004). We call this process the conceptual level hypothesis.

To investigate the hypothesis that questions increase comprehension by creating a preliminary expectation for the conceptual knowledge structure of the text, it is necessary to build a framework that characterizes the structural qualities of questions. In the past, this has been done by describing types of questions. The majority of previous studies have proposed binary levels of question types, such as literal and inferential (e.g., Cohen, 1983; Davey & McBride, 1986; Ezell et al., 1992), definitional versus clarification (MacGregor, 1988), main idea questions versus detail questions (Dreher & Gambrell, 1985; Palincsar & Brown, 1984), and so on. A few studies have described question hierarchies, which categorize questions along a continuum of types or levels. In some of these hierarchies, higher level questions tend to subsume lower level ones, with higher level questions being more inclusive in their requests for information than lower levels. For example, Cuccio-Schirripa and Steiner (2000) described a four-level question hierarchy in which low-level questions required yes/no or factual answers, whereas high-level questions required cause-effect explanations of science phenomena. High-level questions have also been described as eliciting responses such as explanations of concepts, relationships, inferences, and application of information to new situations (King & Rosenshine, 1993); requesting causal explanations (Costa, Caldeira, Gallastegui, & Otero, 2000; Graesser, Langston, & Bagget, 1993); and requesting the integration of complex information from multiple sources (Scardamalia & Bereiter, 1992).

We suggest that, to understand the association between questioning and reading comprehension, it is necessary to construct types or levels of questions that allow examining questioning as a variable. If students' questions in relation to text are examined in terms of the characteristics of their requests for information, the conceptual complexity of these questions can be described. When questions are categorized in terms of the conceptual complexity of the information requested to answer them, they can then be related to reading comprehension.

Conceptual Questions, Prior Knowledge, and Reading Comprehension

Our view of the roles of questioning and prior knowledge in reading comprehension is based on Kintsch's (1998) theory of the construction-integration process. In that view, prior knowledge is used by the reader in conjunction with the text base to construct a situation model that fuses the two. The situation model is new knowledge gained from text. The more prior knowledge possessed by the reader, the fuller the situation model that can be constructed. In this process, prior knowledge contributes declarative information (content) to which the text base can be connected.


If the reader poses conceptual questions prior to reading, the reader brings a new cognitive process to the constructive reading task. Conceptual questions enable the reader to connect the reader's prior knowledge to the text base more easily for several reasons. First, the questions anticipate a possible macrostructure of the situation model. The reader with high-level questions preconstructs a framework into which the text base can be integrated. Not only does this reader have the content for a new situation model based on his or her prior knowledge, but the reader has established part of the structure of the situation model before reading by posing questions.

Second, questioning is likely to facilitate the construction of a full situation model by constructing a high standard of coherence for understanding. That is, a reader who asks highly conceptual questions expects a large number of links among propositions. This expectation leads the reader to construct a relatively large number of causal relationships among words, concepts, and propositions that enable the situation model to be rich, multilayered, and memorable. A reader with low-level questions does not anticipate an elaborate macrostructure, but may only anticipate a list of factual information, which does not facilitate the interconnections that foster reading comprehension.

    Questioning in Ecological Science

In this study, we examined the association of question levels with reading comprehension, as characterized by conceptual knowledge built from expository science texts. These relationships could, however, be examined in any other content domain, such as geography or history. We had three reasons for choosing ecological science texts. First, conceptual knowledge structures are often represented in short amounts of text in this domain, thus minimizing the total volume of reading for young students. Second, concepts are readily identifiable in ecological science texts, facilitating the differentiation of students' newly constructed knowledge from prior knowledge. Third, science texts derived from trade books often have typographical markers, such as headings, captions, indentation, and so on, that afford the construction of conceptual knowledge more readily than other genres.

Specifically, we hypothesized that levels of student self-generated questions in the content domain of ecology would be associated with degrees of conceptual knowledge built from texts in that domain. Students' self-generated questions were categorized according to requests for factual information, simple descriptions, complex explanations, or patterns of relationships among ideas or concepts (see Appendix A for a description of the questioning hierarchy). The structure of this questioning hierarchy varies as a function of the complexity of the knowledge the question elicits. A description of each level is included in the Method section.


    Conceptual Knowledge in Ecological Science

Conceptual knowledge for ecological science in this study was categorized into degrees or levels of knowledge built from text. The six-level hierarchy used in this study was constructed by using students' statements of their knowledge about ecology (Guthrie & Scafiddi, 2004). This hierarchy is comparable to the rubric constructed by Chi et al. (1994), which represented conceptual knowledge of the circulatory system. Like Chi et al.'s categorization, the higher levels in this hierarchy represent levels of conceptual knowledge characterized by qualitative and quantitative shifts with respect to lower knowledge levels (see Appendix B for a description of the knowledge hierarchy). For instance, qualitative changes are evident in knowledge statements that represent a few major concepts from the text with supporting facts, as opposed to statements containing facts only. Higher complexity is also noticeable in knowledge statements in which concepts are coherently organized and related to each other, rather than explained in isolation from each other. In addition, qualitative shifts reflect that more elaborate and higher knowledge statements do not necessarily include more propositions but rather require a substantive integration of information (Guthrie & Scafiddi, 2004). Similar to Chi et al.'s knowledge hierarchy, higher knowledge in this hierarchy is represented by explanations of the essential relationships among concepts in the domain, supported by subordinate information (e.g., facts) in a structured network of knowledge.

    Questions as Contributors to Knowledge Building

If student questioning is to be related to reading comprehension, measured as conceptual knowledge built from text, the relevant question is: How do different question levels contribute to knowledge? Or, more precisely, how does the student asking a higher level question (e.g., Level 4) differ from the student asking a lower level question (e.g., Level 2)? In our theoretical perspective, a student who asks a Level 4 question has understood and managed individual concepts and can focus on a higher organizational level, which entails relationships among concepts. What is presupposed by a higher level question is the ability to anticipate a knowledge structure that includes conceptual relations. For example, a Level 4 question would be "How do tadpoles develop lungs when they become toads, and how do these help them in adjusting to their habitats?" A student asking a question such as this is seeking information on (a) the concept of respiration, by asking about toads' lungs; (b) specific animals' features that will contrast toads and tadpoles; and (c) the concept of adaptation to habitat (explicitly stated in the question). This last piece of the question captures the request for an answer that connects both concepts.


The three components of this question reveal the complexity of the knowledge necessary to answer the question. Essentially, the student asking a Level 4 question forecasts that the type of information the text contains will be comprehensive and will provide an explanation that relates these ecological concepts. In summary, our focus on student questioning has to do with the organization of information in the questioner's mind, with the knowledge that the reader/questioner brings to the text, and how this is expressed through questions.

    Hypotheses

    Three hypotheses are proposed in this study:

1. Students' question levels on a questioning hierarchy will be positively associated with students' levels of reading comprehension measured by a multiple text comprehension task.

2. Students' questions will account for a significant amount of variance in reading comprehension, measured by a multiple text comprehension task, when the contribution of prior knowledge to reading comprehension is accounted for.

3. Students' questions at the lowest level of the questioning hierarchy (Level 1) will be associated with reading comprehension in the form of factual knowledge and simple associations. Students' questions at higher levels in the questioning hierarchy (Levels 2, 3, and 4) will be associated with reading comprehension consisting of conceptual knowledge supported by factual evidence.

    METHOD

    Participants

This study included 360 students from Grades 3 and 4. The 125 third-grade students and 235 fourth-grade students were from four schools in a small city in a mid-Atlantic state. Students participated with parental permission. Eighty-one percent of Grade 4 students in the sample were returning students and had been at the same schools in Grade 3; 19% were newly enrolled. Demographic characteristics of the sample are included in Table 1. On the indicator of socioeconomic status (SES), approximately 20% of students in the sample qualified for free and reduced-price meals, whereas the district had 13%, showing comparability between the sample and the district population. Both third- and fourth-grade classrooms in all schools were self-contained, with the teacher providing the instruction for approximately 25 children. The students' reading achievement was indicated by the Gates-MacGinitie Reading Test mean grade equivalent score (M = 4.08, SD = 1.78 for Grade 3, and M = 5.34, SD = 2.72 for Grade 4).


    Materials

A multiple text packet containing topics on two specific biomes within the field of ecology was the core text for three of the administered tasks. Texts in this packet simulated a variety of information texts in ecology and were extracted from multiple published trade books on Reading Levels 2 to 5 in the domain of ecology. Texts were relevant to the school district science requirements. Each packet consisted of one of three alternative forms: Oceans and Forests (Form A), Ponds and Deserts (Form B), and Rivers and Grasslands (Form C). The three alternative forms were parallel in content difficulty and text structure. Students received alternative forms of the packet in both years. Each packet comprised approximately 75 pages and a total of 22 chapter-like sections. Each section was three to four pages long. Sixteen of these sections were relevant to the packet biome, and six sections were nonrelevant (i.e., distracters). Biome and animal/plant life information was emphasized equally across sections. Distribution of sections was the same across all three forms (i.e., equal number of sections on plants, animals, and biome characteristics). Each packet had a glossary and an index.

Text difficulty was equally distributed throughout the packet. Eight sections were easy text, and eight sections were more difficult text. Text difficulty varied mainly in terms of sentence and paragraph length. Easy text had approximately two to four sentences (3-13 words in length) per paragraph and five to six paragraphs per section. Difficult text had longer sentences (14-28 words per sentence), with an average of 6 to 10 sentences per paragraph, and 13 to 16 paragraphs per section. Font size was generally bigger for the easy text than for difficult text, and the ratio of illustrations to paragraphs was similar for both text types, with approximately one or two illustrations per paragraph. In addition, difficult texts had twice as many captions (per illustration) as easy texts. According to teachers' ratings, 40% of texts were appropriate for a Grade 3 reading level and 60% were appropriate for a Grade 5 reading level.

TABLE 1
Demographic Characteristics of Students in Grades 3 and 4

                         Grade 3          Grade 4       District
Characteristic          n      %         n      %          %

Gender
  Male                  57    45.6      118    50.2        50
  Female                68    54.4      117    49.8        50
  Total                125   100.0      235   100.0       100
Ethnicity
  African American      32    25.8       48    20.7         8
  Asian                  7     5.6        9     3.9         2
  Caucasian             69    55.6      147    63.4        87
  Hispanic               4     3.2       17     7.3         2
  Other                 12     9.7       11     4.7         1
  Total                124   100.0      232   100.0       100

Packets had an average of two to three illustrations per page, with approximately 100 pictures in black and white and 11 pictures in color. The pictures in these texts generally illustrated a concept in the text (e.g., reproduction) or depicted factual and detailed text information (e.g., number and size of water lilies in a river). The majority of these illustrations had accompanying captions explaining the major features depicted. Most illustrations were real-life photographs; the others were diagrams with captions explaining their components.

Due to the specificity of the content domain of the text materials used in this study (i.e., ecological science), the results are limited to expository texts in ecological science. Therefore, generalizability of the results is limited to this content domain and this genre.

    Measures

A total of four tasks were administered to students in Grades 3 and 4 over three school days: prior knowledge, questioning, and multiple text comprehension, as well as the comprehension subtest of the Gates-MacGinitie Reading Test (Form S). The Gates-MacGinitie, a standardized measure of reading comprehension, was used to provide a measure of concurrent validity for the multiple text comprehension task. All measures used in the analyses for Grade 3 were administered in September and December 2002. Measures used in the analyses for Grade 4 were administered in September and December 2003.

Prior knowledge. Prior knowledge activation consists of students' recall of what they know about the topic of a text before and during reading for the purpose of learning the content as fully as possible and linking new content to prior understanding. In this study, this task measured the breadth and depth of students' prior knowledge on an assigned topic in ecology. Students were randomly assigned to one of the three alternative forms of the task: Oceans and Forests (Form A), Ponds and Deserts (Form B), and Rivers and Grasslands (Form C). Students wrote what they knew about their assigned biomes for 20 minutes. Five minutes were devoted to directions. This task measured prior knowledge about the topic before students read about it in the multiple text comprehension task.

Students were prompted to activate their prior knowledge by recalling what they knew about the topics described in the multiple text packet. Prompts for prior knowledge activation consisted of five questions that focused on similarities and differences between the two biomes described in the reading packet. The directions read:


    In the space below, write what you know about [ponds and deserts]. When writing your answer, think about the following questions. How are [ponds and deserts] different? What animals and plants live in a [pond]? What animals and plants live in a [desert]? How do these animals and plants live? How do the plants and animals help each other live? Write what you know. Write in complete sentences. You have 15 minutes to write your answer.

After 7 minutes, the teacher provided the following prompt: "You are doing well. Keep writing if you can. You can turn over the page if you need more room." After 15 minutes, forms were collected. Students' responses to the prior knowledge task consisted of written essays. The following is an example of a third grader's prior knowledge essay on the topic of Ponds and Deserts:

    Deserts are very dry. Ponds are very wet. Deserts and ponds are opposites. At a desert animals don't need a lot of water. They do eat but don't drink as much. There are lots of plants that are in the desert. For example, there are cactuses, and flowers and much, much more. Ponds have lots of animals. For example, there are ducks, and fish. There are lots of plants like lily pads that frogs jump on and reeds that ducks lay their eggs. Deserts have animals like coyotes, rabbits, snakes, birds, owls and lizards (reptiles). There are many other things about deserts and ponds. Well that's all I have to say about deserts and ponds.

Parallel form across time reliability for this task was r(118) = .44, p < .001 for Grade 3, and r(151) = .31, p < .001 for Grade 4. Parallel form across time reliability was established by correlating students' scores on one of three forms of the prior knowledge task in September with scores on an alternative form of the task in December for each grade. Exact interrater agreement for 20 responses for this task in Grade 3 was 80%; adjacent was 100%. Exact interrater agreement for 20 responses for this task in Grade 4 was 77%; adjacent was 100%. The procedure for establishing interrater reliability was very similar for all three tasks for which interrater reliability was indicated. Two independent raters coded students' responses into the corresponding hierarchy for the task. Exact agreement was computed to report whether raters concurred on the identical number (coding) for a given response. Adjacent agreement was computed to report whether raters disagreed by one or less on the coding of a response. If exact agreement was below 70%, discrepancies in final scores were resolved by a third independent rater. Concurrent validity for this measure was indicated by the correlation between prior knowledge and multiple text reading comprehension using the three alternative forms for both of these tasks in December 2002 for Grade 3 and December 2003 for Grade 4. These correlations were r(116) = .45, p < .001 for Grade 3, and r(159) = .35, p < .001 for Grade 4.
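To make the agreement indices concrete, the following minimal Python sketch computes exact and adjacent agreement for two raters; the rater codes and the function name are illustrative assumptions, not data or code from the study.

# Minimal sketch of the exact/adjacent agreement computation described above.
# The two lists hold hypothetical hierarchy codes assigned by two independent
# raters to the same 20 responses; they are not the study's data.

def agreement_rates(rater_a, rater_b):
    """Return (exact, adjacent) agreement proportions for two lists of codes."""
    if len(rater_a) != len(rater_b):
        raise ValueError("Both raters must code the same set of responses")
    n = len(rater_a)
    exact = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Adjacent agreement: raters disagree by one level or less.
    adjacent = sum(abs(a - b) <= 1 for a, b in zip(rater_a, rater_b)) / n
    return exact, adjacent

rater_a = [2, 3, 1, 4, 2, 2, 5, 3, 2, 1, 3, 4, 2, 2, 3, 1, 2, 3, 4, 2]
rater_b = [2, 3, 2, 4, 2, 3, 5, 3, 2, 1, 3, 3, 2, 2, 3, 1, 2, 4, 4, 2]

exact, adjacent = agreement_rates(rater_a, rater_b)
print(f"Exact agreement: {exact:.0%}; adjacent agreement: {adjacent:.0%}")
# Per the procedure above, exact agreement below 70% would send discrepant
# responses to a third independent rater.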


Students' performance on prior knowledge was rated on the same knowledge hierarchy as the multiple text comprehension task. The hierarchy scores ranged from 1 to 6. A score of 1 (Level 1) corresponds to low prior knowledge and is evident in essays consisting of briefly stated simple facts. A score of 6 (Level 6) corresponds to high prior knowledge and is evident in essays in which students describe complex patterns of relationships among several organisms and their habitats. These types of essays are characterized by concepts and science principles that are thoroughly supported by appropriate examples and statements. The essay example previously presented for this task corresponds to a Level 2 in this hierarchy. At this level, students can correctly classify several organisms, often in lists, with limited definitions. These classifications are present in the preceding example (see Appendix B).

Questioning. Questioning refers to students asking or writing self-initiated questions about the content of the text before or during reading to help them understand the text and topic. In this task, students generated questions about life in two biomes that were described in the multiple text packet. Students were given directions to browse the text for 2 minutes: "Look at your packets for a few minutes to remind yourself of the important ideas you have been learning about [ponds and deserts]." After browsing, students received the following directions:

    You have been learning about [ponds and deserts]. What questions do you have about [ponds and deserts]? These questions should be important and they should help you learn more about [ponds and deserts]. You should write as many good questions as you can. You have 20 minutes.

Packets were collected before students started generating their questions, so texts were not available to students during question generation. Students were provided enough space on the forms to write a maximum of 10 questions. Very few students wrote more than 10 questions; questions beyond the tenth were neither coded nor used for data analyses. A large majority of the students completed the task in 20 minutes. We do not believe the questioning task was affected negatively or positively by the prior knowledge activation task.

Coding students' questions: Developing a questioning hierarchy. Students' questions were coded into the four levels of the questioning hierarchy presented in Appendix A. The hierarchy is a valuable tool because it characterizes a wide range of question levels in a qualitative and quantitative way. Qualitatively, questions are described in terms of their requests for information in a way that is transparent for multiple users and applicable to various knowledge domains (e.g., factual versus conceptual questions can be described in geography, as well as in history). In addition, questions are also quantifiable because levels are ascribed values that correspond with objective characteristics of a question, allowing quantitative analyses and multiple uses of the hierarchy.

The questioning hierarchy was developed by the two authors of this study. Based on students' written questions, we constructed a hierarchy characterizing the types of questions students asked. During a pilot phase, we started by examining third-grade students' questions at the beginning of the school year. Students' questions were examined in two stages: (a) questions about animals, and (b) questions about biomes. We sorted 65 questions from a sample of 25 students holistically into six relatively lower and higher categories. We then identified the critical qualities of each category and discussed them. To test our prior classifications we sorted another set of 40 questions into the same categories. We discussed the categories again and reduced them to four categories, based on redundant characteristics across the six original ones. After reasonable agreement on the four categories, we identified two question prototypes for each category.

At the basic level of the hierarchy, Level 1, the questions are simple in form and ask for a factual proposition or a yes/no answer. At Level 2, questions request a global statement about an ecological concept or an aspect of survival of an organism. The qualitative difference between questions at Level 1 and Level 2 rests on the conceptual (rather than factual) focus that Level 2 questions have. A concept is an abstraction that refers to a class of objects, events, or interactions (Guthrie & Scafiddi, 2004). For example, defense is a concept because it refers to a series of behaviors or a class of interactions that takes place for several organisms and species. At the same time, concepts are characterized by their abstractness because they are transferable from organism to organism (i.e., both owls and bears defend themselves and protect their young from predators, yet they do so using different behaviors and different features). Alternatively, paws cannot be characterized as an ecological concept because, although it can be related to defense, it is limited to particular species or organisms. Therefore, a question such as "How do owls defend themselves from predators in the woodlands?" elicits a request for conceptual information that is not captured by a question such as "How big are grizzly bears' paws?" The concepts used in ecological science in this study are reproduction, communication, defense, competition, predation, feeding, locomotion, respiration, adjustment to habitat, and niche (see Appendix C for ecological concept definitions).

Despite the conceptual focus of questions at Level 2, these are still global in their requests for information. Level 2 questions are not specific about aspects of the ecological concept, a feature that Level 3 questions have. A second characteristic of Level 2 questions is that they may also ask about a set of distinctions necessary to account for all the forms of species, or to distinguish a species' habitat or biome. For example, in the question "What kinds of sharks are in the ocean?" rather than a request for a mere grouping or quantification of organisms, the notion of class or group is evident.


Level 3 questions are requests for elaborate explanations about a specific aspect of an ecological concept with accompanying evidence. The higher conceptual complexity in Level 3 questions is evident within the questions themselves because they probe the ecological concept by using knowledge about survival or animal characteristics. These questions show clear evidence of specific prior knowledge about an ecological concept that is contained in the question itself (e.g., "Why do elf owls make homes in cactuses?"). Level 3 questions require information about ecological concepts (i.e., knowledge about the concept of adaptation to habitat is expressed in the previous question) by specifying a particular aspect of that concept (i.e., that elf owls use cacti to make their homes).

Lastly, questions at the highest level, Level 4, aim at the interrelationships of ecological concepts or at interdependencies of organisms within or across biomes (e.g., "Why do salmon go to the sea to mate and lay eggs in the river?"). Questions at Level 4 are differentiated from the other three levels because they constitute a request for principled understanding, with evidence for complex interactions among multiple concepts and possibly across biomes. At Level 4, interactions between two or more concepts are central to the requests for information.

In summary, the progression from Level 1 to Level 4 questions is based on the complexity of the question as expressed in requests for knowledge, with Level 1 questions requesting factual knowledge and Levels 2 to 4 asking about conceptual knowledge with increasing degrees of specificity and complexity within the question.

Students wrote from 0 to 10 questions and were given a hierarchy score of 1 to 4 for each question, with a score of 0 if they wrote no question. A score of 0 was also given if the question was categorized as noncodable. Noncodable questions included statements (rather than questions), requests for semantic definitions, questions containing misconceptions in their formulation (e.g., "Why is the forest surrounded by water?"), questions including ethical or religious notions (e.g., "Why did God make grasslands?"), anthropomorphic questions (e.g., "Why are bats sad?"), and nonreadable questions due either to very poor spelling or poor grammar. A student's score could range from 0 to 40. The sum of the question levels was calculated by adding the codes assigned to the questions. The questioning mean was computed by dividing the sum by the number of questions asked. The number of questions asked included the noncodable questions (coded as 0). The questioning mean was used in all analyses as the indicator of the average level of questions asked.
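As a concrete illustration of this scoring rule, the short sketch below computes the sum and the questioning mean for one hypothetical student; the codes are invented for illustration and are not taken from the study's data.

# Sketch of the questioning score described above: each question is coded
# 0 (noncodable) through 4, the sum is taken over all questions written,
# and the questioning mean divides that sum by the number of questions asked
# (noncodable questions stay in the denominator). Codes are hypothetical.

question_codes = [1, 2, 0, 3, 2, 1, 4]  # one student's coded questions

question_sum = sum(question_codes)                      # 13; possible range 0-40
questioning_mean = question_sum / len(question_codes)   # about 1.86

print(f"Sum of question levels: {question_sum}")
print(f"Questioning mean: {questioning_mean:.2f}")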

Exact interrater agreement for coding students' questions to the questioning hierarchy in Grade 3 was 90%; adjacent was 100%. Exact interrater agreement for coding students' questions to the questioning hierarchy in Grade 4 was also 90%; adjacent was 100% (100 questions for 25 students). Parallel form across time reliability coefficients were calculated for each grade. Parallel form across time reliability was r(116) = .43, p < .001 for Grade 3, and r(173) = .23, p < .003 for Grade 4, indicating adequate reliability. Internal consistency reliability for this task yielded a Cronbach's alpha coefficient of .83 (10 items).
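For readers who want the internal consistency computation spelled out, here is a brief sketch of Cronbach's alpha. It assumes the 10 items are the 10 question slots scored 0 to 4; that assumption and the score matrix are ours for illustration, not the article's.

# Sketch of a Cronbach's alpha computation, assuming each of the 10 question
# slots is treated as an item scored 0-4. The score matrix is hypothetical.
import numpy as np

def cronbach_alpha(scores):
    """scores: 2-D array with rows = students and columns = items."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical question-level scores for five students on 10 question slots.
scores = np.array([
    [1, 2, 1, 0, 2, 1, 3, 2, 1, 0],
    [2, 3, 2, 1, 3, 2, 4, 3, 2, 1],
    [0, 1, 0, 0, 1, 1, 2, 1, 0, 0],
    [3, 4, 3, 2, 4, 3, 4, 4, 3, 2],
    [1, 1, 2, 1, 2, 2, 2, 1, 1, 1],
])
print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")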

Multiple text comprehension. Multiple text comprehension refers to students' competence in identifying text-relevant information, reading to obtain question-relevant information, taking notes, and writing an open-ended statement expressing conceptual knowledge gained from performing this task. Like the other two tasks, the content domain for this task was ecological science. This task was administered in three sessions over 2 days. On the first day, students spent approximately 20 minutes searching for information. On the second day, students spent a total of approximately 40 minutes searching for information and an additional 30 minutes writing what they had learned from the text. During the first two sessions, students spent time searching for information, reading, and taking notes about the two biomes described in the multiple text packets. The searching activity consisted of identifying text-relevant information by choosing sections that helped them explain how animals and plants live in two biomes (e.g., ponds and deserts). As part of the searching activity, students were explicitly taught how to use the table of contents, how to select relevant sections, and how to take notes in the spaces provided on the given forms.

In the third session, students were asked to write about what they learned during their interaction with text in the two previous sessions. Prompts consisted of the same questions posed for the prior knowledge task (e.g., How are [oceans and forests] different? What animals and plants live in a [forest]?). Students had 30 minutes to express their knowledge and were prompted to write in full sentences. They were encouraged to keep writing after 7 minutes and again after 20 minutes into the task.

Students' essays were coded into the categories of the hierarchy for conceptual knowledge (Appendix B). The same knowledge hierarchy was used to score students' responses to the prior knowledge task. Interrater agreement for 20 responses for Grade 4 was 100% for adjacent coding and 70% for exact coding; interrater agreement for 20 responses for Grade 3 was 95% for adjacent coding and 60% for exact coding. For Grade 3, discrepancies in final scores were resolved by a third independent rater. Parallel form across time reliability was r(108) = .38, p < .001 for Grade 3, and r(151) = .46, p < .001 for Grade 4, indicating adequate reliability. Concurrent validity was indicated by correlations with the Gates-MacGinitie Reading Test of r(114) = .30, p < .001 for Grade 3, and r(160) = .35, p < .001 for Grade 4.

An example of a third grader's Level 6 essay follows:

    Grassland and rivers are different because grasslands are dry and have few water and rivers are a channel with water in it. Water lilys, trouts, salmon, sea wasp, lotuses, water weed, otters, piranhas, and platypus all live in a river. Elephants, cheetahs, deers, birds, rinos, grass, flowers, trees, butterflies, hyenas, and puff adder all live in grassland. Animals drink, eat, and sleep to live, plants also drink, eat, sleep, and also need sunlight. Plants help animals by making oxygen and when animals die they can fetalize the soil and that is good for plants.

Gates-MacGinitie Reading Test. The comprehension tests of Levels 3 and 4 (Form S) of this standardized measure of reading comprehension were used in this study. These tests consist of approximately 12 paragraphs on varied subjects with a range of two to six questions on each paragraph for students to answer. The extended scale score was used for all statistical analyses.

    RESULTS

The means and standard deviations for all variables are presented in Table 2, and the correlations are presented in Table 3. The first hypothesis was that students' question levels on the questioning hierarchy would be positively associated with students' level of text comprehension measured by a multiple text comprehension task. For both grades, this hypothesis was addressed by examining the correlations of questioning and multiple text comprehension. For Grade 3, questioning correlated with multiple text reading comprehension, r(116) = .38, p < .001. Prior knowledge correlated with questioning, r(125) = .31, p < .001, and prior knowledge correlated with multiple text comprehension, r(116) = .45, p < .001. The Gates–MacGinitie test correlated significantly with the multiple text reading comprehension task, r(114) = .30, p < .001. For Grade 4, questioning correlated with multiple text reading comprehension, r(211) = .19, p < .01. Prior knowledge correlated with questioning, r(221) = .21, p < .01, and prior knowledge correlated with multiple text comprehension, r(204) = .40, p < .001. The Gates–MacGinitie test correlated significantly with the multiple text reading comprehension task, r(197) = .34, p < .001.

TABLE 2
Means and Standard Deviations for All Variables for Grades 3 and 4

Cognitive Variables             Grade 3    Grade 4
Prior knowledge
  M                                1.95       2.35
  SD                               0.69       0.86
  n                                 128        221
Questioning
  M                                1.30       1.28
  SD                               0.52       0.61
  n                                 125        235
Multiple text comprehension
  M                                2.44       3.29
  SD                               0.98       1.22
  n                                 119        211
Gates–MacGinitie
  M                              469.90     494.64
  SD                              37.44      42.17
  n                                 164        218
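The correlation notation used above, r(df), reports degrees of freedom (n − 2) alongside the Pearson coefficient. A minimal sketch of that reporting convention follows; the score vectors are simulated for illustration and are not the study's data.

```python
# Illustrative sketch of Pearson correlation with the r(df) = r(n - 2) reporting convention.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
questioning = rng.normal(1.3, 0.5, 118)                        # simulated questioning levels
comprehension = 0.7 * questioning + rng.normal(2.4, 0.9, 118)  # simulated comprehension scores

r, p = pearsonr(questioning, comprehension)
df = len(questioning) - 2
print(f"r({df}) = {r:.2f}, p = {p:.4f}")
```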

The second hypothesis of this study was that students' questions would account for a significant amount of variance in reading comprehension, measured by a multiple text comprehension task, when the contribution of prior knowledge to reading comprehension was accounted for. This was tested in multiple regression analyses for Grades 3 and 4. In each analysis, multiple text reading comprehension was the dependent variable, with prior knowledge entered first and questioning entered second as independent variables. This order of entry was intended to examine the contribution of student questioning when prior knowledge was statistically controlled. Missing data were handled with list-wise deletion.
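The two-step (hierarchical) entry described here can be illustrated with a short sketch. The data frame, column names, and simulated values below are hypothetical; the point is only the order of entry and the F test on the change in R² when questioning is added.

```python
# Illustrative sketch of a hierarchical regression: prior knowledge entered first,
# questioning entered second, with an F test on the change in R^2.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 120
prior = rng.normal(2.0, 0.7, n)
questioning = 0.3 * prior + rng.normal(1.0, 0.5, n)
comprehension = 0.5 * prior + 0.4 * questioning + rng.normal(0.0, 0.8, n)
df = pd.DataFrame({"prior": prior, "questioning": questioning,
                   "comprehension": comprehension}).dropna()   # list-wise deletion

step1 = smf.ols("comprehension ~ prior", data=df).fit()                # prior knowledge only
step2 = smf.ols("comprehension ~ prior + questioning", data=df).fit()  # questioning added

delta_r2 = step2.rsquared - step1.rsquared
f_change, p_change, _ = step2.compare_f_test(step1)  # F test for the added predictor
# Note: params gives the unstandardized coefficient; the article reports standardized betas.
print(f"R = {step2.rsquared ** 0.5:.2f}, delta R^2 = {delta_r2:.2f}, "
      f"F = {f_change:.2f}, p = {p_change:.4f}, "
      f"coef(questioning) = {step2.params['questioning']:.2f}")
```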

Results for Grade 3 (Table 4) indicated that questioning accounted for a significant amount of variance in multiple text reading comprehension and the Gates–MacGinitie Reading Test over and above that accounted for by prior knowledge. After prior knowledge was accounted for, questioning explained 7% of the variance in multiple text reading comprehension, which was significant, F(1, 113) = 10.43, p < .01. The multiple R was .52, and the final beta for questioning was .27 (p < .01). When the Gates–MacGinitie was entered as the criterion, questioning accounted for 6% of the variance on this standardized test after prior knowledge was accounted for, F(1, 121) = 7.89, p < .01. The multiple R was .47, and the final beta for questioning was .23 (p < .01).

Results for Grade 4 (Table 5) indicated that, after prior knowledge was accounted for, questioning explained 2% of the variance in multiple text comprehension, which was significant, F(1, 201) = 3.99, p < .05. The multiple R was .42, and the final beta for questioning was .13 (p < .05). In addition, questioning also accounted for 4% of the variance over and above prior knowledge when the Gates–MacGinitie test was the criterion variable, F(1, 202) = 11.69, p < .001. The multiple R was .52, and the final beta for questioning was .21 (p < .001).

TABLE 3
Correlations Among Prior Knowledge, Questioning, and Reading Comprehension for Grades 3 and 4

Cognitive Variables                   1        2        3        4
1. Prior knowledge                    -     .31***   .45***   .41***
2. Questioning                     .21**      -      .38***   .34***
3. Multiple text comprehension     .40***   .19**      -      .30***
4. Gates–MacGinitie                .48***   .31***   .34***     -

Note. Correlations for Grade 3 are above the diagonal; those for Grade 4 are below the diagonal.
**p < .01. ***p < .001.

TABLE 4
Regression Analyses of Prior Knowledge and Questioning on Reading Comprehension for Grade 3 Students

Dependent and Independent Variables      R      R²     ΔR²        F        Final β
Multiple text comprehension
  Prior knowledge                       .45    .20     .20    28.34***     .36***
  Questioning                           .52    .27     .07    10.43**      .27**
Gates–MacGinitie
  Prior knowledge                       .41    .16     .16    23.98***     .33***
  Questioning                           .47    .22     .06     7.89**      .23**

**p < .01. ***p < .001.

TABLE 5
Regression Analyses of Prior Knowledge and Questioning on Reading Comprehension for Grade 4 Students

Dependent and Independent Variables      R      R²     ΔR²        F        Final β
Multiple text comprehension
  Prior knowledge                       .40    .16     .16    38.93***     .38***
  Questioning                           .42    .18     .02     3.99*       .13*
Gates–MacGinitie
  Prior knowledge                       .48    .23     .23    59.43***     .43***
  Questioning                           .52    .27     .04    11.69***     .21***

*p < .05. ***p < .001.

We tested for the interaction effects of prior knowledge and questioning on multiple text comprehension for each grade. Results from regression analyses showed that the interaction between these two variables was not significant for Grade 3, F(1, 112) = 1.879, p = .173, or for Grade 4, F(1, 200) = 0.959, p = .329. Figures 1 and 2 show multiple text comprehension as a function of questioning levels and prior knowledge levels for each grade. For both grades, main effects were observed. As shown in the regression analyses, questioning improved comprehension significantly for students with high prior knowledge (Grade 3, ES = 1.04; Grade 4, ES = .57) and low prior knowledge (Grade 3, ES = .45; Grade 4, ES = .20). Similarly, prior knowledge had benefits on comprehension for students with high questioning levels (Grade 3, ES = .97; Grade 4, ES = .80), as well as for students with low questioning levels (Grade 3, ES = .51; Grade 4, ES = .35). Had there been an interaction between questioning and prior knowledge, these two variables would have been dependent on each other for their impact on reading comprehension, with one variable (e.g., questioning) making a difference at one level of the other variable (e.g., high prior knowledge), but not making a difference at the other level of that variable (e.g., low prior knowledge). The absence of an interaction, or the independence of these variables from each other, is evidenced by the fact that either one of the two variables has an impact on reading comprehension, irrespective of the levels of the other variable.
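The interaction test reported here amounts to adding a prior knowledge × questioning product term and asking whether it explains additional variance. A minimal sketch, continuing the hypothetical data frame (df) built in the regression sketch above, is:

```python
# Illustrative sketch of the interaction test (continues the df from the sketch above).
import statsmodels.formula.api as smf

main_effects = smf.ols("comprehension ~ prior + questioning", data=df).fit()
with_interaction = smf.ols("comprehension ~ prior * questioning", data=df).fit()  # adds prior:questioning

f_int, p_int, _ = with_interaction.compare_f_test(main_effects)
print(f"interaction term: F = {f_int:.2f}, p = {p_int:.3f}")
# A non-significant p here (as reported for both grades) indicates that the benefit of
# questioning does not depend on the level of prior knowledge, and vice versa.
```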

The third hypothesis was that students' questions at the lowest level of the questioning hierarchy (Level 1) would be associated with reading comprehension levels in the form of factual knowledge and simple associations, whereas students' questions at higher levels in the questioning hierarchy (Levels 2, 3, and 4) would be associated with reading comprehension levels consisting of factual and conceptual knowledge. A chi-square test for independence was used to address this hypothesis.

Frequencies of high and low scores were computed for the variables of questioning and multiple text reading comprehension for each grade. Low-level questions reflected factual knowledge (defined as Level 1 in the questioning hierarchy). High-level questions reflected conceptual and factual knowledge (defined as Levels 2, 3, and 4 in the questioning hierarchy). Scores for the multiple text comprehension task were also categorized into high and low levels. Scores for multiple text comprehension were low if they equaled 2 or below on the knowledge hierarchy. Scores were high if they equaled 3 or above on the knowledge hierarchy. This partitioning of high and low for both variables was done to make the subgroups as equivalent as possible in size, to enable a chi-square to be computed, and to meet the requirement that expected frequencies in each cell should be at least 5.

FIGURE 1. Mean proportion of multiple text comprehension scores as a function of prior knowledge levels and questioning levels for Grade 3 students.

The chi-square tested whether question levels were independent of the levels of conceptual knowledge. For both grades, Tables 6 and 7 show the observed frequencies in the form of 2 × 2 matrices, where the rows correspond to the two categories of the multiple text comprehension variable and the columns correspond to the two categories of the questioning variable. For Grade 3, the Pearson chi-square was statistically significant, χ²(1, N = 116) = 12.23, p < .001, which indicates that the hypothesis of independence between the two variables is rejected. It should be noted (see Table 6) that the majority of the students (67%) were located in the low questioning/low multiple text comprehension group (n = 49) and in the high questioning/high multiple text comprehension group (n = 29). The higher proportion represented by these two groups accounted for the significant association between these variables.

FIGURE 2. Mean proportion of multiple text comprehension scores as a function of prior knowledge levels and questioning levels for Grade 4 students.

For Grade 4, the Pearson chi-square statistic was also statistically significant, χ²(1, N = 100) = 8.96, p < .01. Again, the higher proportion of cases was represented by the cells of low questioning/low multiple text comprehension (n = 42) and high questioning/high multiple text comprehension (n = 24), which indicates a significant association between these two variables for this sample (see Table 7). These results support a specific alignment between questioning levels and levels of conceptual knowledge built from text measured by the multiple text comprehension task for Grade 3 and Grade 4 students.
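The 2 × 2 test of independence can be reproduced directly from the observed frequencies in Table 6 (or Table 7). A minimal sketch follows; the reported statistics appear to correspond to the Pearson chi-square without Yates's continuity correction, so the correction is turned off here.

```python
# Illustrative sketch: chi-square test of independence for the Grade 3 frequencies in Table 6.
from scipy.stats import chi2_contingency

observed = [[49, 19],   # low multiple text comprehension: low / high questioning
            [19, 29]]   # high multiple text comprehension: low / high questioning

chi2, p, dof, expected = chi2_contingency(observed, correction=False)
print(f"chi2({dof}, N = {sum(map(sum, observed))}) = {chi2:.2f}, p = {p:.4f}")
print("smallest expected cell count:", round(expected.min(), 2))  # should be at least 5
```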

TABLE 6
Questioning Levels According to Levels of Multiple Text Comprehension for Grade 3 Students

                                      Questioning
Multiple Text Comprehension      Low      High     Total
Low                               49        19        68
High                              19        29        48
Total                             68        48       116

Note. The values represent frequencies of questioning categories (high/low).

TABLE 7
Questioning Levels According to Levels of Multiple Text Comprehension for Grade 4 Students

                                      Questioning
Multiple Text Comprehension      Low      High     Total
Low                               42        19        61
High                              15        24        39
Total                             57        43       100

Note. The values represent frequencies of questioning categories (high/low).

The two groups of students were compared for descriptive purposes. A multivariate analysis of variance determined any significant differences between the two age groups on the outcome variables of prior knowledge, multiple text comprehension, and questioning. Results from this analysis showed significant differences between Grades 3 and 4 on all three variables collectively. Results from a follow-up analysis of variance showed significant differences between the two groups on two of the three variables. Statistically significant differences between the two grades were found for prior knowledge, F(1, 303) = 19.01, p < .001, with Grade 4 (M = 2.35) higher than Grade 3 (M = 1.95). Multiple text comprehension was also statistically significantly different, F(1, 303) = 37.50, p < .001, with Grade 4 (M = 3.29) higher than Grade 3 (M = 2.44). Questioning was not statistically significantly different across grades (M = 1.28 for Grade 4, and M = 1.30 for Grade 3).
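The grade comparison is a one-way multivariate analysis of variance followed by univariate follow-ups. A hypothetical sketch of that sequence follows; the data are simulated (using the Table 2 means only as rough parameters) and the column names are invented, so this is not the authors' analysis code.

```python
# Illustrative sketch of the grade comparison: MANOVA across the three outcomes,
# followed by univariate one-way F tests for each outcome.
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA
from scipy.stats import f_oneway

rng = np.random.default_rng(2)
g3 = pd.DataFrame({"grade": "3", "prior": rng.normal(1.95, 0.7, 120),
                   "comprehension": rng.normal(2.44, 1.0, 120),
                   "questioning": rng.normal(1.30, 0.5, 120)})
g4 = pd.DataFrame({"grade": "4", "prior": rng.normal(2.35, 0.9, 185),
                   "comprehension": rng.normal(3.29, 1.2, 185),
                   "questioning": rng.normal(1.28, 0.6, 185)})
df = pd.concat([g3, g4], ignore_index=True)

manova = MANOVA.from_formula("prior + comprehension + questioning ~ grade", data=df)
print(manova.mv_test())  # omnibus test across the three outcomes considered collectively

for outcome in ["prior", "comprehension", "questioning"]:
    f, p = f_oneway(g3[outcome], g4[outcome])  # univariate follow-up
    print(f"{outcome}: F(1, {len(df) - 2}) = {f:.2f}, p = {p:.4f}")
```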

    DISCUSSION

The findings in this investigation showed that students' questions were positively associated with their reading comprehension. This association was shown in the correlations between student questioning and reading comprehension for students in Grades 3 and 4. These findings are consistent with suggestions from previous investigators that there is a positive relationship between student-generated questions and reading comprehension (e.g., Davey & McBride, 1986; Ezell et al., 1992; King & Rosenshine, 1993; Rosenshine et al., 1996; Scardamalia & Bereiter, 1992). However, this study expands the previous literature because of its distinctive measure of student self-generated questions, which allowed these questions to be related to reading comprehension and prior knowledge. In this study, student questions were described as requests for conceptual knowledge from text. Categorizing questions on the basis of their requests for content, rather than by question form (e.g., question words such as what, when, who; question stems), is consistent with previous suggestions in the literature: "Defining categories on the basis of content of the information requested rather than form is consistent with theories of question answering in the cognitive sciences" (Graesser et al., 1994, p. 209). Thus, our results contribute to the extant literature on student questioning by specifying a measure of question quality and presenting empirical evidence for the association of student questioning and reading comprehension.

To investigate the relationship between student questioning and reading comprehension, we examined the contribution of questioning to reading comprehension while taking into account the influence of prior knowledge. Regression analyses showed that third and fourth graders' self-generated questions contributed a significant amount of variance to reading comprehension in the domain of ecology when the contribution of prior knowledge was statistically controlled. Furthermore, questioning still explained a significant amount of variance over and above prior knowledge in reading comprehension when the Gates–MacGinitie was the dependent variable in the regression analyses for both grades. These findings indicate that the contribution of questioning to reading comprehension is not constrained to the topic or content domain of the text. Rather, they show that questioning, understood as a strategy that serves to seek conceptual information, is a process that benefits the skills involved in standardized reading tests such as the Gates–MacGinitie. Both of these findings contribute to the literature in two main ways.

First, previous research has indicated that students who possess higher prior knowledge in a given domain tend to ask a higher proportion of questions, or higher level questions, than students who have lower prior knowledge in the domain (Miyake & Norman, 1979; Van der Meij, 1990). Although we observed similar findings, our results provide evidence showing that students' spontaneous question generation, in reference to authentic school texts, accounts for variance in reading comprehension above and beyond the variance accounted for by prior knowledge in the domain of ecological science. Furthermore, as discussed, these findings do not seem to be constrained to the specific domain of ecological science. Indeed, questioning accounted for variance in reading comprehension when this was measured with an experimenter-designed test and with a standardized test of reading comprehension. This last finding verifies the unique contribution of questioning to reading comprehension through replication of results across different measures.

Second, we found no evidence of an interaction between prior knowledge and questioning for either grade. The absence of this interaction indicates that both of these variables had benefits for students' reading comprehension independently of one another. As shown in Figures 1 and 2, questioning contributed to reading comprehension for students with low prior knowledge, as well as for students with high prior knowledge, in both grades. These results appear to contradict the findings of Scardamalia and Bereiter (1992), who indicated that fifth and sixth graders tended to ask more definitional types of questions when they did not know enough about a topic but asked more high-level questions when they had some prior knowledge on the topic. Similarly, middle-school students tended to ask more questions on word definitions than high-level/causal questions when they had difficulty understanding the terminology in the text (Costa et al., 2000). However, in these studies, this apparent interaction between types of questions and prior knowledge was not tested empirically. In this sense, our analyses permit discussion of the contributions of each of these variables to reading comprehension. Specifically, not only did significant regression weights indicate that prior knowledge and questioning contributed to reading comprehension independently of each other, but the absence of an interaction lent further support to their separate benefits on reading comprehension when levels of each variable were examined. Had the interaction between these two variables been significant for either grade, questioning would have been dependent on prior knowledge for its contribution to reading comprehension. In other words, questioning would have shown benefits on reading comprehension for students with high prior knowledge, but not for students with low prior knowledge. Our results do not support this notion.

Thus, in our view, questioning contributes to comprehension in parallel, concurrently with prior knowledge. Questioning facilitates the use of prior knowledge but does not itself require prior knowledge beyond the extremely minimal level that any student would bring to the text. Likewise, prior knowledge does not require questioning beyond a minimal level. Therefore, these two processes are parallel, rather than interdependent in their action, during the meaning construction process that takes place during reading comprehension.


Our third finding was that students' question levels were associated with levels of reading comprehension measured as conceptual knowledge built from text. Specifically, questions that requested simple facts were associated with reading comprehension levels consisting of factual knowledge and simple associations, whereas questions requesting information about concepts were associated with higher levels of reading comprehension consisting of conceptual knowledge supported by factual evidence and examples.

The majority of the students asking Level 1 questions, as defined by the questioning hierarchy used in this study, tended to have low levels of reading comprehension, whereas the majority of the students asking conceptual questions, as expressed in Levels 2, 3, and 4, had levels of conceptual knowledge commensurate with those levels. For example, students who asked questions such as "Are sharks scary?" (Level 1) tended to gain knowledge from text consisting of statements such as "I know that most sharks are terrifying. Some of them are less terrifying like the carpet shark." Statements such as these denote the absence of ecological concepts and biome definitions and include only a few characteristics of a biome or an organism.

Students who asked questions requesting a global statement about an ecological concept, such as "What do grasslands animals eat?" (Level 2), tended to gain simple concepts from text. Such knowledge is expressed in statements like this one:

Rivers and grasslands are different. I will tell you the difference is. I will tell you the animals and plants of a river and grassland. Hear are the animals and plants of a river salmon, hippo, crocodile, sea plants, otters, and polar bears. They all live by water, plants or meat. Some live by water, some don't. Hear are the animals and plants in a grassland lion, coyote, eagle, elephant, prairie dog, zebra, and orangutan. They all live by water, most of them eat meat and only some of them eat plants. Some of them live in trees one of them live in a hole some of them live on the ground.

Knowledge built from text at this level is characterized by the identification of one or more biomes (e.g., rivers and grasslands), in which the information is minimal, factual, and may appear as a list, as in the previous statement. Yet, these statements are not characterized by full definitions of biomes or descriptions of organisms' adaptations to biomes. In addition, weakly stated concepts may be included, such as the concept of feeding in this statement.

Students asking Level 3 questions requested an elaborated explanation about a specific aspect of an ecological concept. The specificity of the concept was generally expressed by using prior knowledge within the question. Students who, on average, asked questions at Level 3 had knowledge representations at Levels 3 and 4 in the knowledge hierarchy. For example, a third-grade student's question at Level 3 was "What kinds of birds eat river animals?" The following is a knowledge statement commensurate with this question level:

One thing I know about rivers and grasslands are the animals that live there. Some animals that live in grasslands are grasshoppers, crickets and vultures. Some types of grasslands are savannahs, prairies, and plains. Prairies and plains have large openings and a lot of grass but very little trees. The big difference between a river and a grassland is the main natural resource. The main natural resource for a river is water. The main natural resource for a grassland is grass. Some animals in a river are otters, hippos, and fish. It is not a regular type of hippo, it is called a River Hippo. Otters like to eat snakes. One way all plants and animals help each is for food.

In this example, the student expressed conceptual knowledge (Level 3) by presenting conceptual, defining characteristics typical of each biome (e.g., "The big difference between a river and a grassland is the main natural resource"). The student also included types of grasslands with characteristics for each type, as well as a few correct classifications of organisms to each biome (e.g., grasshoppers, crickets and vultures). Survival concepts, such as feeding and interdependence between animals, are also briefly stated.

Lastly, students who asked questions requesting a pattern of relationships between concepts (Level 4) tended to show patterns of organized conceptual knowledge (Level 5). For instance, a question such as "How do animals in the deserts get water and protect themselves from heat if there is not water and it doesn't rain a lot?" (Level 4) requests information about the interaction of the organism with the biome. Students who were able to ask questions at this level of complexity tended to write essays that expressed similar complexity (essays at Levels 5 and 6), such as the following:

Ponds and deserts are different, deserts have little or no freshwater and ponds have a lot of water. The animals that live in the desert are jack rabbits, snakes, insects, donkeys, spiders, scorpions, elf owls, road runners, and vultures. The plants in the desert are cactuses, flowers, trees, and bushes. The animals that live in a pond are fish, frogs, shrimp, great blue herons, green herons, tadpoles, birds, insects, spiders and raccoons. The plants that live in a pond are duckweed, lily-pads, algae, bushes, trees, and flowers. Animals in the desert rely on plants and animals for food and water. Animals in ponds rely on other animals. Some on water. Some on both. Animals in ponds rely on plants for food, oxygen, and shelter. Scorpions kill their prey using their stinger in their tail. Jack rabbits usually feast at night. They eat desert grasses, prickly pears, and other plants. Insects in ponds eat algae and plants. Bigger insects eat small fish.


The student who wrote this (Level 5) essay showed command of several ecological concepts such as predation, feeding, and protection, with supporting information for each of them. The student also showed several correct classifications of animals and plants to their corresponding biomes (e.g., scorpions and jack rabbits in deserts). In addition, comparisons across the two biomes and interdependencies between organisms were also included (e.g., "Animals in the desert rely on plants and animals for food and water"). Knowledge statements at these levels show higher organization by emphasizing knowledge principles that subsume relationships among ecological concepts and between organisms and their biomes.

We propose that the association between question levels and reading comprehension levels, as described here, serves to inform theoretical views of the contribution of questioning to comprehension. First, previous investigators have speculated that the generation and answering of higher, inferential questions could be due to the active processing of text (Davey & McBride, 1986). In other words, question asking and answering mobilizes attention for learning broadly from text (Wittrock, 1981). However, if this view were fully accurate, then questioning of any form would increase comprehension. Our data suggest that it is not the presence or absence of questions in general, but the presence or absence of higher level questioning, that facilitates higher comprehension. By comparing high- and low-level questions, we vastly reduce the explanatory scope of the active processing hypothesis. If high-level conceptual questions have greater benefits for reading comprehension than low-level questions, the benefit is due to questioning levels. Our chi-square analyses showed that lower level questions were associated with lower than average comprehension, and high-level questioning was associated with high levels of multiple text comprehension. Consequently, we doubt that questioning improves comprehension by increasing generalized cognitive activation.

Another explanation found in the literature for the relationship between questioning and reading comprehension is that attentional processes are elicited by asking questions. Van den Broek et al. (2001) found that questions induce a selective enhancement of memory because the reader focuses attention only on the text information needed to answer the questions. Our findings differ from van den Broek et al.'s in two main ways. First, we investigated students' self-generated questions, whereas they studied experimenter-posed questions. Second, we examined cognitive characteristics of questions in general, and comprehension of diverse texts whose content was broader than that of the questions. In other words, we did not attempt to examine whether the content of the questions predicted or related to the content of knowledge built from text. Our interest focused on the relationship between levels of questions and levels of conceptual knowledge built from text. Thus, we propose that this relationship is explained by a conceptual level hypothesis. In sum, given our evidence and our measurement of questioning, we did not attempt to distinguish between the attention hypothesis proposed by van den Broek et al. and the conceptual level hypothesis. Therefore, we cannot rule out the possibility that students' questions had an attentional effect of enhancing recall and/or comprehension of the sections of text that pertain solely to their questions.

In conclusion, we suggest that students who tend to ask lower level questions struggle with identifying the overall hierarchical structure and the major interrelationships among the concepts within texts in a knowledge domain. Conversely, students who overall ask higher level, conceptual questions tend to represent knowledge built from text in a conceptually organized, hierarchical structure. Readers asking high-level, conceptual questions can anticipate and bring to the text an elaborate text macrostructure. Consequently, these readers would tend to build fuller text representations and richer situation models (Kintsch, 1998), characterized by a larger number of connections and relationships among the major concepts in the text. Our findings, then, are consistent with an attention