2.3 Studies on integrated tasks
The significance of integrated language tasks has been one of the primary rationales underlying communicative language teaching, and it has been echoed by research in English for Specific Purposes (ESP) indicating that reading-to-write tasks are fairly common in university contexts (Bridgeman & Carlson 1983; Hale et al. 1996).
The construct of reading-to-write tasks can be approached from two perspectives: pedagogical and theoretical. The pedagogical perspective pertains to instructional tasks that integrate reading and writing for a variety of educational purposes, while the theoretical perspective concerns the underlying abilities that subjects draw on while performing these tasks. According to Delaney (2008), investigation into the reading-to-write construct involves reading, writing, and constructivist approaches.
2.3.1 Merits and problems of integrated writing tasks
Test developers' and users' growing interest in integrated writing tasks for assessing writing ability, especially in academic contexts, has been driven largely by a concern for authenticity, the major underlying rationale, because this task type is assumed to yield high predictive validity (Lewkowicz 1997; Wesche 1987). Writing tasks in academic contexts often require writers to read and synthesize source texts (Eblen 1983; Braine 1989; Carson 2001), which underscores the prominence of reading in academic writing. Furthermore, writing is hardly ever done in isolation but is usually conducted in response to source texts (Leki & Carson 1997; Weigle 2002). Integrating writing with other language skills therefore enhances authenticity by replicating real task demands from particular language use domains, such as academic writing. Cumming et al. (2005) viewed authenticity as the main advantage of including both writing-only tasks and new integrated writing tasks in the framework for the TOEFL iBT. It was also believed that integrated writing tasks can trigger a positive washback effect (Cumming et al. 2004; Weigle 2004).
Another reason for the intense interest in reading-to-write tasks lies in fairness and equity (Plakans 2007). Writing-only tasks primarily require writers to draw on prior knowledge stored in long-term memory to accomplish the writing goal, but this knowledge varies considerably with individual, educational, and cultural backgrounds (Read 1990; Weigle 2004). In reading-to-write tasks, by contrast, source texts offer all subjects exactly the same topical knowledge, so that they are less likely to be disadvantaged by a lack of pertinent prior knowledge (Yang 2009); this has motivated test developers to integrate reading-to-write tasks into language assessment batteries.
Despite the advantages of reading-to-write tasks in terms of authenticity and fairness, some researchers have challenged their construct validity.
First, the appeal to authenticity has been criticized by supporters of performance-based testing. Although test developers and users may think that authenticity makes this type of task more appropriate for academic writing assessment, language testers have warned that authenticity by itself is not sufficient to justify test tasks. Bachman and Palmer (1996) held that it might be impossible to recreate real-life settings for language use in language assessment. Academic writing is a case in point, because it usually involves reading activities spread over several weeks as learners digest and synthesize sources and revise their writing; in an assessment context this is simply not possible. Chalhoub-Deville (2001) also pointed out that test task development should be supported by both theoretical and empirical research on the language construct as well as by systematic observation of performing processes in a given context. Thus, it could be argued that even though reading-to-write tasks may capture the reading component of real academic writing tasks, we still need evidence from investigation into subjects' actual performing processes, especially from a mental perspective.
Second, validity is another concern for critics of integrated language tasks. The currently accepted concept of validity in language testing has been greatly influenced by the work of Messick (1989), Bachman (1990), Kane et al. (1999), and Kane (2001). Validity is usually conceptualized as "an integrated evaluative judgment of the degree to which empirical evidence and theoretical rationales support the adequacy and appropriateness of inferences and actions based on test scores" (Messick 1989: 13). Thus, test developers and users seek validity in the interpretation and use of test scores, or in the inferences drawn on the basis of test scores (Zou 2004). In other words, to establish the validity of a test, arguments must be made to support those inferences, and a test is considered valid if evidence shows that it assesses the underlying ability, that is, that its scores are an accurate indicator of the ability being tested.
Reading-to-write tasks have been challenged because the writing construct is complicated by the simultaneous integration of a reading component (Charge & Taylor 1997). It is difficult to determine the role of reading in the observed act of writing. As Jamieson (2003) asked, does this type of test measure reading, writing, both, or an interaction between the two? Hirvela (2004) likewise held that there is no comprehensive or definitive model of the ESL reading-to-write interface, so that investigation into this connection needs to continue.
In summary, reading-to-write tasks have inherent potential for measuring literacy skills in academic contexts in terms of authenticity and fairness; nonetheless, criticism of this task type also demonstrates that studies are clearly needed to obtain an accurate and clear picture of how subjects perform them. In particular, we need more in-depth empirical evidence about the mental operations subjects carry out while performing reading-to-write tasks. In addition, previous studies on integrated writing tasks have overwhelmingly investigated either written products or writing processes, while few researchers have focused on how subjects deploy a repertoire of strategies when completing integrated writing tasks, particularly in an EFL context.
2.3.2 Product-focused studies of integrated test tasks
The inclusion of integrated reading-to-write tasks in language tests allows test users and developers to draw inferences about subjects' abilities to generate and synthesize ideas into written products by comprehending, evaluating, and analyzing source texts. As a growing number of language assessment programs adopt integrated writing tasks, interest has gradually increased in the accurate interpretation of test scores and performance on those tasks. Over the past decade, researchers in both language assessment and language teaching have concentrated on the discourse features of the written products from integrated writing tasks.
Using a writing-only prompt and two reading-to-write prompts involving five short source passages on a topic as instruments, Watanabe (2001) explored the evaluation of source-based writing tasks by analyzing three types of data (ratings, subjects' texts, and rater reactions) by means of multi-faceted Rasch models, think-aloud protocols, and interviews. He argued that the products from reading-to-write tasks can be scored reliably and that this task type is a more reliable and valid measure of writing ability than of reading ability; accordingly, scores on the reading-to-write tasks were a comparatively strong predictor of writing ability. Furthermore, he noted that although the reading and writing measures together explained 40% of the variance in the source-based writing scores, the independent reading scores alone were not a strong predictor of reading-to-write ability.
Given that integrated writing tasks incorporate other language skills, the differences between conventional writing-only tasks and this task type have aroused researchers' interest. Four representative product-focused studies addressed research questions comparing scores and discourse features between writing-only tasks and integrated reading-to-write tasks (Brown et al. 1991; Lewkowicz 1994; Cumming et al. 2005; He & Min 2012).
Brown et al. (1991) conducted a contrastive study of subjects' performance on a response essay based on a given reading passage and an impromptu expository essay. The study used writing assignments for placement testing at the University of Hawaii, designed for both native and non-native speakers of English, as its instruments. The researchers compared topics and task types, finding significant differences across topics but none across task types. This study was perhaps one of the earliest attempts to analyze the differences between reading-to-write tasks and writing-only tasks by focusing on the written products; it claimed that using a single writing task in a language assessment program was somewhat less valid, because test users could obtain more accurate evidence of subjects' writing ability through multiple-sample evaluation. However, the researchers did not investigate subjects' performing processes in responding to the different writing prompts.
Lewkowicz (1994) compared a source-based writing task and a writing-only task administered to non-native English speakers. Two groups of undergraduate students in Hong Kong were asked to complete a reading-to-write task and a writing-only task respectively, both designed on the same topic. The written products from the two task types were scored with holistic rating rubrics and were also measured and compared with regard to length and the number of viewpoints elaborated. The findings were largely in accordance with those of Brown et al. (1991): no significant differences were found in the final scores between the two task types, and no significant difference emerged in the length of the written products from the two groups. Interestingly, however, the number of points writers elaborated to support their thesis statements differed significantly: the group given the two reading passages included more points than the other group. Thus, according to Lewkowicz, although the source text seemed to provide subjects with more ideas, it might not improve the quality of their written products, since the difference in essay length across the two groups was not statistically significant; the writers might have relied so heavily on the source texts that this hindered the development of their own viewpoints. This conclusion, however, might not be entirely convincing. On the one hand, the content of the written products was not analyzed thoroughly beyond the number of points used in the essays, so the extent to which idea development differed between the two groups remains vague. On the other hand, since the inter-rater reliability in this research was rather low (only 0.61), the test scores themselves could be somewhat unreliable. It must also be recognized that even though holistic scores have indicated a strong correlation between writing-only tasks and integrated writing tasks, differences clearly emerge when subjects' performances are carefully inspected. More importantly, further concrete evidence is needed to show how the subjects accomplished the integrated writing task.
Another study, conducted by Cumming et al. (2005), served as an in-depth pilot study for the Next Generation TOEFL, investigating the extent to which the written products from two types of integrated tasks (a reading-to-write task and a listening-to-write task) differed from those of a traditional writing-only task across both scoring levels and task types. The written products were coded for several features, including lexical/syntactic sophistication, rhetoric, and source use, which the researchers analyzed across scoring levels. The findings indicated that the products from the two types of writing tasks differed somewhat, particularly with respect to complexity, rhetoric, and pragmatics. The researchers also found that the written products showed significant differences in grammatical accuracy between the two task types and differed across the three scoring levels. By analyzing textual features across task types and scoring levels, the researchers provided important discussion of source text use and suggested that, since the texts produced in the two task types are similar in many respects, the two types could be employed alternately or even complementarily as evidence of the writing construct. As a result, alongside the independent writing task, previously the only measure of writing on the TOEFL, an integrated writing task was incorporated at the launch of the TOEFL iBT in 2005. The inclusion of both task types in practice addresses the concern that inferences about subjects' writing ability based on a single writing sample might reflect the writing construct inadequately. In this sense, the research provided substantial evidence to guide test development practice. However, it did not touch upon subjects' performing processes or strategy use; the researchers called for further studies of subjects' thinking while performing integrated tasks, since such studies could indicate the extent to which various factors affect the quality of the written products from integrated writing tasks. In other words, balanced research interest in reading-to-write cognitive processes might stimulate future research aimed at a much clearer picture of subjects' reading-to-write performance.
He & Min (2012) conducted a contrastive study examining to what extent and in what way subjects' linguistic proficiency affects the quality of their written products in a writing-only task and an integrated writing task. The researchers selected 104 undergraduate English majors at a university in mainland China as subjects, who performed a writing-only task and a reading-listening-to-write task on exactly the same topic. Data were collected from the subjects' actual performance and from questionnaires on their perceptions of the two task types. The results indicated that (1) with regard to overall test performance, low- and high-level subjects performed significantly better on the integrated writing task than on the writing-only task, but no significant difference was found among intermediate-level subjects; and (2) with respect to specific text features, subjects at all three levels obtained significantly higher scores on the integrated writing task than on the writing-only task in terms of content, lexical complexity, and grammatical accuracy, whereas no significant difference was reported for organizational clarity. The researchers claimed that what integrated writing tasks assess is a dynamic competence involving an interaction of different skills, e.g., reading, listening, and writing. Compared with earlier work, this study offered an insightful investigation into the quality of written products from integrated language tasks in the EFL context in particular. As the researchers argued, however, future studies could take the difficulty and quantity of the input as latent variables and examine how they affect the quality of subjects' output.
As noted previously, integrated writing tasks combine multiple skills, such as reading and listening, in a single task that asks subjects either to summarize or to express viewpoints on a topic provided in source texts; accordingly, the task type involves an element not found in prototypical independent writing tasks: the use of source texts. Taking 480 samples from the integrated writing section of the TOEFL iBT, Gebril & Plakans (2013) studied the features of source text use in performances and the degree to which they differ across score levels and task topics using a quantitative approach. They employed multiple regression analysis to scrutinize how four aspects of source text use predict scores on the integrated tasks: (1) the selection of important ideas from the source texts, (2) the use of ideas taken from the sources, (3) integrated style, and (4) verbatim source use. The results showed that about 55% of the variance in scores was explained by features of source use, with the use of listening texts and the inclusion of important ideas from the reading texts explaining most of the variance. Verbatim source use was negatively associated with scores on the integrated tasks. Source use in integrated writing tasks, as noted by the researchers,
is demanding and requires writers to draw on a number of skills and to make challenging decisions. To complete these tasks successfully, writers have to comprehend the source material in a second language, select important ideas, juggle several source texts, and finally synthesize information from these sources in their writing (p. 227).
Source use clearly differentiates integrated tasks from writing-only tasks, and this study offered valuable insight into the nature of source use primarily from a quantitative perspective; however, our understanding of source use in integrated tasks could have been more complete had relevant qualitative evidence been incorporated to triangulate the quantitative data. In other words, the generalizability of the findings may be limited by the exclusive use of quantitative methods.
With the increasingly wide use of integrated writing tasks in large-scale language tests, the comparability of prompt difficulty has become a major concern. Typically, prompt comparability is supervised through rigorous task development that relies on expert judgment. Differences in test taker performance, however, could be attributable to differences in prompt characteristics across test administrations. Thus, the relationship between specific prompt characteristics and test taker performance has spurred research interest aimed at addressing this concern. A recent study by Cho, Rijmen and Novák (2013) took prompt characteristics as a latent variable to examine the extent to which they influence the scores given to written products on the integrated reading-listening-to-write tasks of the TOEFL iBT. Concentrating on subjects' own perceptions of task prompts, the researchers collected data from a questionnaire investigating subjects' evaluations of the difficulty of 107 TOEFL iBT integrated writing task prompts administered from 2005 to 2009. The results indicated that part of the variation in average scores could be attributed to subjects' language ability, which might also vary across test administrations. In terms of task difficulty, two variables, namely the clarity of ideas within the prompt and the difficulty of the topics in the texts, also turned out to be potential sources of variation in the average scores on the integrated writing section. This study offered empirically supported information to test developers on ensuring prompt comparability across test administrations in practice.
Another recent study, conducted by Sawaki, Quinlan and Lee (2013), examined the factor structure of approximately 446 subjects' responses to a TOEFL iBT reading-listening-to-write task. The EFA and CFA results identified three interrelated but distinct constructs underlying the subjects' written products: Comprehension, Productive Vocabulary, and Sentence Conventions. Furthermore, path regression results indicated that more proficient subjects performed significantly better on the test task than those in the low-scoring group on all three constructs. While this research shed light on the specific aspects that writing instruction should address, particularly for test preparation, it did not investigate the underlying reasons for these differences.
In short, much of the previous research has relied primarily on evidence derived from test results, including textual analysis of features of lexicon, syntax, grammar, and source use, and quantitative analysis of test scores, without examining the reading-to-write process; accordingly, the results of these product-focused studies have not been able to provide a consistent picture of how subjects actually perform integrated writing tasks. There is therefore a need for further studies of subjects' mental activities when fulfilling integrated tasks, since it would be comparatively informative to investigate the extent to which variables relating to the subjects themselves affect their performance on integrated writing tasks. Balanced research interest in reading-to-write performing processes, furthermore, might promote research aimed at obtaining a much clearer picture of the task type.
2.3.3 Process-focused studies of integrated test tasks
The studies reviewed above addressed questions only through the written products; nonetheless, it is even more important to understand whether the integrated test task accurately measures what it purports to assess by exploring subjects' performing processes. As Bachman (1990) argued, "A more critical limitation to correlational and experimental approaches to construct validation, however, is that these examine only the products of the test taking process—the test scores—and provide no means for investigating the processes of test taking itself" (p. 269). In response to the call for further understanding of integrated tasks from a process-focused perspective, some researchers have begun to investigate how subjects engage with this task type. Among them, Lia Plakans has conducted a series of studies primarily concerning the differences between reading-to-write tasks and writing-only tasks, the role of reading strategies, and source use.
In her doctoral dissertation, Plakans (2007) compared subjects' composing processes on a writing-only task and a reading-to-write task, using a qualitative research method based on grounded theory. What distinguished this study from others is that subjects' characteristics, i.e., interest and expertise, were taken as variables. She found that subjects' interest and previous writing experience might have greater effects on their performing processes in reading-to-write tasks than in the writing-only task. For interested writers, with source texts at their disposal, the composing process proved more constructive; the findings showed little difference in process for the other writers. In addition, Plakans proposed a working model for composing in reading-to-write tasks comprising two stages: preparing to write and writing.
Figure 2.2 A working model for composing reading-to-write tasks
In preparing to write, she held, subjects read the prompt, the instructions, and the source texts while using various strategies, among which rereading is predominant. This stage is quite linear, with subjects moving step by step through planning. After planning, they begin the writing stage, in which they follow nonlinear processes: planning, rehearsing phrases, rereading source texts, and checking linguistic issues such as grammar and lexicon. The writing phase, according to the researcher, is more circular and overlapping. This model sheds light on the specific process subjects engage in while performing a reading-to-write task. Furthermore, her study depicted a fairly complete picture of the differences in composing processes between reading-to-write tasks and writing-only tasks. However, this process-based research took a qualitative approach involving a small number of participants, and the method itself has limitations owing to its controlled, experimental nature.
Refining her previous research, Plakans (2009) conducted an inductive study of the reading strategies used by 12 non-native English subjects while completing an integrated reading-to-write task. Protocol analysis provided valuable insights that enrich our understanding of differences in reading strategy use in integrated writing tasks in the ESL context. The data drawn from think-aloud protocols and interviews were transcribed, coded, and analyzed, and on this basis the reading strategies used were classified into five categories: "(a) goal-setting for reading the source texts, (b) cognitive processing, (c) global strategies, (d) metacognitive strategies, and (e) mining the source texts for use in writing" (p. 256). The researcher also investigated the extent to which subjects used those reading strategies differently across three proficiency levels. The findings indicated that the appropriate choice and quantity of reading strategies affect the quality of the written products. High-scoring subjects were reported to use more reading strategies overall. No difference was found in the quantity of strategies used between mid- and low-level subjects, although both groups reported using a variety of strategies. Plakans held that integrated writing tasks seem to elicit subjects' strategic competence and that the scoring supported this view. As noted previously, however, the generalizability of the findings may be limited by the small number of subjects. Furthermore, more evidence is needed concerning both reading and writing strategy use in integrated writing tests in either ESL or EFL contexts.
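The comparison Plakans reports (strategy quantity across proficiency levels) rests on coding each think-aloud segment into one of the five categories and tallying the codes per group. A minimal sketch of that coding-and-counting step, with invented codes rather than her data:

```python
# Minimal sketch of coding-and-counting think-aloud data: each segment is
# assigned one of the five reading-strategy categories from Plakans (2009),
# then tallied per proficiency level. The coded segments are invented examples.
from collections import Counter

coded_segments = {
    "high": ["mining", "global", "metacognitive", "mining",
             "cognitive", "goal-setting", "global", "metacognitive"],
    "mid":  ["cognitive", "mining", "global", "cognitive"],
    "low":  ["cognitive", "cognitive", "mining", "global"],
}

totals = {}
for level, segments in coded_segments.items():
    counts = Counter(segments)            # frequency of each strategy category
    totals[level] = sum(counts.values())  # total strategy uses for the level
    print(level, totals[level], dict(counts))
```

In this invented example the high-level group shows more strategy uses overall while the mid- and low-level groups are equal in quantity but differ in which categories they use, mirroring the shape of the reported finding.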
In a follow-up study, Plakans and Gebril (2012) employed a mixed-method approach to investigate how subjects used source texts in a reading-to-write task and how test scores were associated with these practices. A total of 145 subjects completed a questionnaire immediately after finishing a reading-to-write task, and another nine subjects participated in think-aloud protocol sessions and follow-up interviews. One of the findings was that source use helped generate ideas about the topic and provided language resources. The researchers claimed that proficiency level affected the extent to which lower-level learners could comprehend the source texts, but it was not found to be associated with the functions of source use.
Given that integrated writing tasks are a relatively innovative task type in assessment research and practice, a number of researchers have conducted validation research primarily from the perspective of performing processes, seeking information on how the various integrated components interact. For example, taking Messick's (1989) framework as the theoretical rationale, Asención (2004) conducted a validation study of integrated writing tests for ESL learners by means of a summary writing task and a reading-to-write task. She obtained similar results for the subjects' cognitive operations in the processes of writing a summary and a response essay. Subjects were found to monitor their processing more frequently in response-essay writing than in summary writing, and they also planned more often on the response-essay task than on the summarizing task. Despite these insightful attempts to compare the writing processes of different reading-to-write tasks, more evidence is needed regarding the relationship between reading-to-write tasks and writing-only tasks.
Driven by curiosity about the relationship between integrated writing tasks and writing-only tasks, Delaney (2008) investigated the reading-to-write construct using two tasks, a summary and a response essay based on the same source text, with a sample of 139 participants, to explore whether the construct assessed by integrated test tasks is the simple sum of subjects' reading and writing abilities or an independent construct. On the basis of her results, she perceived the reading-to-write construct not as a unitary one, but rather as:
a dynamic ability that interacts with task demands and individual factors. The ability to write a summary does not necessarily indicate an ability to perform other tasks that combine reading and writing like the response essay. The constructive perspective on reading-to-write emphasizes the interaction of both reading and writing to promote meaning construction; therefore, the reading-to-write ability should be highly related to reading and writing. Findings in this study revealed, however, that reading-to-write scores were weakly related to reading ability (p. 147).
The researcher thus regarded the reading-to-write construct as a unique one, weakly related to reading ability involving the search for basic information and quite different from writing an essay without background reading support. In addition, she held that reading-to-write ability differs from writing-only ability, since the content of writing is affected by the information selected from the source texts, by the structure of the source text, and by how appropriately subjects connect source information with their own prior knowledge. Furthermore, she maintained that language proficiency and educational level exert a modest effect on subjects' performance on reading-to-write tasks. It seems intuitively evident, however, that NS and ESL learners would perform better than EFL learners and that graduates would score higher than undergraduates; the findings in this regard could be more convincing if the research had focused on one specific group of subjects, e.g., undergraduates in the ESL context, across different language proficiency levels. Furthermore, although this study did not elaborate on the cognitive complexity of the performing processes in reading-to-write tasks, it provided evidence of the effect of individual factors on subjects' performance of integrated tasks.
Kim (2008) investigated the development and validation of a reading-to-write test in the context of ESL diagnostic assessment by utilizing an effect-driven argument structure. A reading-to-write test was developed and administered to ESL students at the University of Illinois at Urbana-Champaign. The researcher adopted a mixed-method approach to analyze data collected from instructors' and subjects' evaluations of the test design, subjects' performing processes, and the relationships among the scores of different test tasks. The findings indicated that the reading-to-write tasks yielded the intended effects, including test users' satisfaction with the test design, the advantageous role of mediating tasks in the reading-to-write process, and students' positive evaluation of the usefulness of the formative diagnostic reading-to-write test design.
What distinguishes McCulloch's (2013) study from the aforementioned research is its focus on reading-to-write practice in a real-life context rather than in artificially designed test settings. Adopting a qualitative method, the researcher conducted an exploratory study of the reading-to-write processes and source use of two MA students as cases in a real-life ESL academic writing context. The two students were found to interact with the source texts differently in both the frequency and the range of their reading-to-write behaviors. The study made insightful attempts to analyze how reader/writers perform reading-to-write tasks by using a qualitative method in real-life settings, as the students read to write their Master's theses. Nevertheless, the researcher only reported the two subjects' behaviors while they read to write, without explicating the extent to which their cognitive or metacognitive operations affected their performance. Moreover, despite the very innovative attempt to conduct a fully qualitative study in naturalistic academic settings, the small number of subjects inevitably renders the generalizability of the findings vulnerable.
In addition to the aforementioned research, including the metacognitive dimension in the exploration of the reading-to-write process and its interaction with reader/writers' performance provides a more complete picture of what this capability involves. Three specific metacognitive strategies have been analyzed, namely planning, monitoring, and evaluating. Stein (1990b) conceptualized the planning strategy as the operations conducted to regulate such subprocesses in reading-to-write as the integration of ideas from the source text and memory, the utilization of textual features, the construction of text, the organization of ideas, and the attainment of rhetorical goals. Planning is regarded as a key element of text construction. Hayes and Nash's (1996) research is a case in point. They proposed a model of writing planning which comprises abstract planning, by which the writer generates ideas and makes decisions concerning rhetorical problems, as well as language planning, which reflects how the writer articulates ideas by using various linguistic resources. The abstract planning posited by Hayes and Nash is, to some extent, consistent with the purposes of planning outlined by Stein (1990b). Moreover, reading-to-write research has correlated the planning processes carried out by reader/writers with "abstract planning" as explained by Hayes and Nash (1996). For example, Ruiz-Funes (1999a) held that planning pertains to "planning of the text" to be written by estimating the rhetorical requirements of the writing task, while Sarig (1993) argued, in a study of summary writing, that planning is primarily concerned with both goal-setting and the strategy-selecting awareness needed to achieve the goals set for reading as well as writing.
Planning was generally viewed as reader/writers' mental operations when performing integrated tasks, such as summarization, analytic essays, or reading reports. Some researchers thought the planning operations are restricted to certain stages of task performance, such as after-reading and pre-writing processes (Kennedy 1985), whereas others suggested that planning prevails throughout the whole task (Flower et al. 1990); still others classified planning operations into different levels to distinguish the local planning of the immediate text from the overall planning of the task, the text organization, or the connections among information chunks (Durst 1987).
Planning operations were also reported in previous ESL research on reading-to-write. Using protocols, Ruiz-Funes (1999a) found that ESL learners engaged in planning while composing text from sources. Yao (1991) thought that planning was closely associated with the process of text writing itself; other research, on the other hand, reported that planning was more closely correlated with the whole task-performing process (Sarig 1993). Furthermore, planning was believed to be related to other processes such as organizing (Yang & Shi 2003) and monitoring (Endres-Niggemeyer et al. 1991).
In the literature concerning reading-to-write tasks, monitoring is the second most frequently explored metacognitive strategy, since reader/writers extensively reported instances of this operation in previous studies.
Monitoring has been recognized as a significant metacognitive strategy in L1 reading-to-write studies. Durst (1989) extensively analyzed monitoring operations in terms of the awareness of different aspects such as the text, prior knowledge, and the effectiveness of plans. Other research considered monitoring an evaluative component (Kennedy 1985; Langer 1986), a means by which reader/writers check their comprehension of what has been read (Flower et al. 1990; Penrose 1992), or an executive awareness that controls features of task performance.
Devine (1993) claimed that monitoring entails the awareness to evaluate both the features of the information required and the processing demands of the task. Awareness of task features and requirements is an indispensable component of knowledge transforming, the stage of academic writing proposed by Bereiter and Scardamalia (1987a). They referred to knowledge transforming as a complex problem-solving paradigm in which skilled writers plan and set goals to exert effective control over two specific problem-solving contexts (i.e., the content problem and the rhetorical problem). In other words, reading-to-write can be understood as a problem-solving situation in which the reader/writer is supposed to reflect upon how the ideas generated from the source and for the text should be organized, selected, and connected.
Cumming (1989) thought that monitoring was manifested in problem-solving behaviors, while Endres-Niggemeyer (1991) held that it should be labeled as control. Still other researchers, such as Sarig (1993), claimed that monitoring is mainly concerned with the operation of assessing in reading-to-write. Other researchers found that operations such as rereading, revising, and recognizing task problems also fall into the category of monitoring, depending on the context of the reading-to-write tasks (Esmaeili 2000; Yang & Shi 2003).
Evaluating has been recognized as a third type of metacognitive strategy in reading-to-write research. Esmaeili (2002) investigated ESL adults' strategy use when the reading and writing components of a test were thematically integrated. The data from interviews and retrospective questionnaires indicated that many subjects reported reconsidering the accomplishment of previous goals, planned thoughts, and written texts, as well as the global and local changes made to their written texts, all of which can be attributed to the use of the evaluating strategy.
In his contrastive study of writing-only and integrated writing tasks, Esmaeili (2000) found that subjects completing the integrated writing task reported a repertoire of strategies such as considering task requirements, considering content, monitoring text composing, adjusting plans and styles, revising arguments, and generating additional content. He held that some strategies could be deployed in both task types, while others were exclusive to integrated writing tasks (e.g., borrowing language use). Moreover, he argued for the necessity of a framework depicting the learning and writing strategies employed by subjects. In this regard, more qualitative analyses of subjects' verbal reports during the completion of reading-to-write tasks could shed light on the expected associations between the theoretical framework and what actually happens when this task type is fulfilled in an assessment context.
Recall that the constructivist approach holds that reader/writers perform textual transformations by conducting the operations of organizing, selecting, and connecting. This view, nevertheless, fails to explicate the underlying metacognitive mechanisms. The metacognitive view could, to some extent, make up for this limitation, since we could gain a much clearer picture of how reader/writers engage in the reading-to-write process by explicating what helps them become consciously aware of task goals, resource evaluation, and the orchestration of the strategies available to them.
In summary, taking into account both theoretical and empirical perspectives on the tasks, the reading-to-write construct can be interpreted as the ability to perform tasks in which information from source text(s) is utilized for the purpose of constructing new texts to accomplish reader/writers' communicative goals. The meaning-constructing process indispensably involves metacognitive operations such as planning and monitoring, which control the cognitive processes (i.e., organizing, selecting, and connecting) through which the information in the source text is used and a new text is produced. While the studies reviewed above provide insight into the processes or strategy use involved in reading-to-write tasks, reader/writers' metacognitive operations in response to integrated tasks still remain comparatively unexplored in EFL writing and language testing. This study attempts to explore subjects' metacognitive strategy use when interacting with a source text and their own texts.
The review of literature indicates that strategy use is an abstract construct in both psychology and language assessment; a pertinent question therefore arises: how do researchers elicit it?