Contents

List of Figures ix
List of Tables xi
Foreword by Anne Herrington xiii
Acknowledgments xix

1 Teaching, Administering, and Supporting Writing at the State Comprehensive University 3
   Why State Comprehensive Universities? 3
   State Comprehensive Colleges and Universities: How They Fit in the Higher Education Landscape 4
   Study Design 5
   The Value and Limitations of the Bird's-Eye View 8
   Chapter Overview 12

2 Assessments of Writing Studies' Practices: 1927 to the Present Study 18
   1927: Warner Taylor Establishes Survey Methodology and Major Questions 19
   1955: Emerson Shuck Advances Methodologies and Begins Study of Writing Program Administration 19
   1960s: Researchers Vary in Methodology but Unite in Leveling Sharp Criticism 20
   1974: Ron Smith Enlarges the Sample but Remains a Bleak Prognosticator 23
   1980s: Witte and Peterson Highlight WPA Issues and Opportunities, and Burhans Bemoans Lack of Pedagogical Advancement 24
   1994: Larson Focuses on Lack of Consensus and Assessment 27
   Twenty-First Century: Listserv Surveys of WPA and Writing Faculty Experiences and Perceptions 27
   Social Science versus Humanities: The Report versus the Argument 29
   Nonempirical Assessments of the Field from Scholars at Leading Institutions 30
   Fulkerson and Haswell: Regret for the Field's Lack of Unity 32
   The Contemporary Context: Public Higher Education under Siege? 37

3 The Back End of First-Year Composition: Institutional Support through Infrastructure and Policies 44
   Broad View of Writing Infrastructure at SCUs 45
   Institutional Home for First-Year Composition: English Dominates 45
   Drilling Down: The Conditions that Correlate with Institutional Location 49
   The Rise of the WPA? 51
   Do WPAs Matter? 53
   Staff or Faculty? Rank? Does Status Matter? 54
   Graduate-Student and Adjunct WPAs 57
   Who Teaches FYC? 58
   FYC Faculty Development and Training: Uneven and Unpredictable 63
   Course Standardization through Syllabi 67
   Class Size: Broad Range, with Significant Correlations by Region, Presence of Special Populations, and Presence of Graduate Programs 68
   Basic Writing: Persisting at SCUs 72
   Basic Writing: Class Size 75
   Basic Writing: Placement Practices 75
   Basic Writing: Credit and Exit Assessment 79
   Exemptions 79
   Institutional Supports for First-Year Composition 80

4 What Are We Doing with First-Year Composition? 85
   First-Year Composition Is Required 86
   First-Year Composition Outcomes 94
   What Happens in FYC? 98
   Research Instruction and Writing in First-Year Composition 99
   Course Topic: Unspecified Beats Out Literature and Research 102
   Honors Options for FYC 106
   Approach to Teaching Writing 107
   Process Writing 108
   Writing as Mastery of Skills 113
   Skills and Grammar 114
   Areas of Emphasis in FYC: Argumentation 117
   Teaching Writing as a Rhetorical Act 120
   WPA Influence on Instructional Approach 121

5 Beyond First-Year Composition 125
   Writing across the Curriculum: State Comprehensive University Study in Context 126
   Writing beyond FYC in the Study 127
   Writing beneath University Requirements 132
   Variables Present and Absent at WAC Schools 134
   Requirements beyond FYC 137
   Writing Centers 139
   Programs in Writing: Concentrations, Minors, and Majors 143
   Early Proponents of Vertical Writing 144
   Discontentment with English: The Case for Independence through Disciplinary Legitimacy 146
   Twenty-First-Century Writing Programs: Arrived? 147
   State Comprehensive University Study Results: Majors Rare, Smaller Programs Common 151
   Writing Majors: Associations with Other Variables 153

6 Writing at the State Comprehensive U 159
   Next Steps: At Our Colleges and Universities 163
   Next Steps: National Level 165

Appendix A: Methods 169
   Selecting the Sample 170
   Variables 173
   Data Collection: Emphasis on Publicly Available Information 174
   Data Analysis 178
   Statistics 179
   Limitations 181
   Directions in Writing Studies Research 182
Appendix B: Survey 184
Appendix C: Coding Sheets 190
Appendix D: List of Variables 193
Appendix E: Sample List 204
References 206
About the Author 222
Index 223

1

Teaching, Administering, and Supporting Writing at the State Comprehensive University

Why State Comprehensive Universities?

Writing at the State U: Instruction and Administration at 106 Comprehensive Universities presents a detailed, contextualized, and empirical analysis of the state of writing programming at four-year state comprehensive universities, a broad classification that includes research universities, MA-granting universities, and BA-granting colleges. The idea of this book began with an idea for another book: I wanted to write about the challenges but also the possibilities for great writing instruction and support at US state comprehensive universities (SCUs), as this was a subject with which I was deeply, and personally, familiar. I believed I had figured out how to be an effective writing program administrator (WPA) at my school, Montclair State University in New Jersey, although it had taken close to a decade of hard work to create, organize, and support writing curricula, programming, and approaches to staffing and faculty development of which I could be proud.
Along the journey I had often felt apart—and sometimes excluded—from the scholarly conversation on writing program administration, as it was so often set within the context of the research university or, less frequently, the small college. I received invaluable support from the WPA listserv and from conference conversations where WPAs from SCUs abound. But my long and often lonely journey to develop a strong and well-regarded writing program at an SCU made me want to reach out and provide support to writing faculty and WPAs in similar situations, and also to graduate faculty at research universities whose preparation of these faculty is limited by their own research-university contexts. From my conversations with WPAs and writing faculty at SCUs, I know many wonder how they can shape a good program without the resources with which the doctoral programs they graduated from were equipped: graduate students to teach the majority of the classes (and who could be required to take a graduate class in writing studies); a staff of directors, coordinators, and secretaries; and a cohort of writing studies colleagues to work with, among other assets.

DOI: 10.7330/9781607326397.c001

The book I thought I'd write was inspired by my wish to show what could be done. (In fact, a lot can be done, and many departures from what is possible at a research university actually amount to a superior writing experience for the undergraduates we are pledged to serve because comprehensive universities, like BA-granting institutions, are typically less beholden to research and doctoral-education imperatives that can deemphasize undergraduate education.) However, I soon realized that what I really knew was what I had done at Montclair State University. I had a great case study.
But I didn't know much about what was happening in other writing programs at other SCUs that weren't specifically represented in the scholarship or run by personal friends. Having worked with my colleague Melinda Knight on a study that used publicly available information to examine writing at 101 top universities, I believed publicly available information would allow me to sample and explore a large number of SCUs so as to draw a much fuller, albeit bird's-eye, portrait. With Melinda, I had found that much can be discovered about how an institution teaches and administers writing by combing carefully and systematically through publicly available information. With these goals and primary method established, I developed these research questions:

1. To what extent have established principles and practices of writing instruction and administration been implemented at state comprehensive universities?

2. In what ways is writing instruction at state comprehensive institutions, as a class, different from writing instruction at other classes of institutions, and from writing instruction at different historical time periods?

3. How are the major scholarly debates in FYC instruction and writing program administration reflected—or not—at state comprehensive universities?

My strategy for investigating these questions was to collect existing data that would reasonably be available for all institutions in a large sample, from catalogs and other publicly available sources, to get a robust, bird's-eye view. But first I had to select a sample, thus raising the question, what is a state comprehensive university?
State Comprehensive Colleges and Universities: How They Fit in the Higher Education Landscape

The category of SCUs, also called regional public universities, is fairly broad, including selective state institutions (e.g., James Madison University in Virginia), large research universities (e.g., Texas A&M, Northern Arizona University), and even very small institutions (e.g., University of Maine at Presque Isle and Mayville State in North Dakota). As an institutional class, the SCU is subject to less scholarly attention than is the flagship state university, yet according to the association that represents SCUs, the American Association of State Colleges and Universities, SCUs collectively enroll 3.8 million students, occupying a kind of middle ground within the public-education landscape between the most elite research university and the small private college or the community college. Informally, the SCU is well known in higher education and to the public; there are approximately 420 such institutions nationwide (American Association of State Colleges and Universities 2010). Thus, greater understanding of writing programming at SCUs is valuable not only to the institutions that fall under this classification but also to higher education and writing researchers who wish to understand the state of college writing in the country today. Surprisingly, to date, in the robust and expanding body of scholarship devoted to writing program administration, no writing scholar has specifically attended to SCUs as a class, even though these institutions grant baccalaureate degrees to half the students enrolled in public US four-year colleges and universities and 28 percent of all students attending private or public four-year colleges or universities (American Association of State Colleges and Universities 2014, 11).
Study Design

Institutional class selected, I designed my methods for investigation. Following Richard Haswell's (2005) call for empirically based scholarship and bolstered by Dan Melzer's (2009, 2014) work analyzing writing assignments through an Internet-based search process, among others, I sought a method that would enable me to speak broadly about national trends. Although researchers in writing studies have historically developed samples by sending out invitations to participate in a survey, for this study I chose to select a representative sample and gather most of my data independent of these participants. In choosing a master list of institutions to sample from, I selected the membership list of the American Association of State Colleges and Universities (AASCU), as I wanted a list that would provide a cross-section of colleges and universities representing a broad and diverse range of public four-year institutions. AASCU schools range in student enrollment from 845 to 58,000, with an average enrollment of 10,430; collectively, they are responsible for educating 51 percent of all minority students and 48 percent of students who enroll in public four-year institutions (American Association of State Colleges and Universities 2014, 9–11), facilitating my goal of discovering what is happening in the vast middle of public, four-year higher education. The 106-institution sample was pulled from the AASCU list randomly after stratification by region and size. (I aimed for a sample of about 100 and ended up with 106, as this was the number that allowed for statistical representation of the sample in respect to region and size.)
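The book describes this stratified draw in prose only and names no software, so the following is an illustrative sketch, not the author's actual procedure. It shows one common way a proportionally allocated, stratified random sample can land slightly off a round target (as with 106 versus 100): each stratum is guaranteed at least one institution and allocations are rounded. The field names, region labels, and size bands below are invented for the example; AASCU's actual strata are not published in this form.

```python
import random
from collections import defaultdict

def stratified_sample(institutions, target=100, seed=2011):
    """Draw a sample whose region-by-size mix mirrors the master list.

    `institutions` is a list of dicts with hypothetical keys
    "name", "region", and "size_band".
    """
    rng = random.Random(seed)  # fixed seed for reproducibility
    strata = defaultdict(list)
    for inst in institutions:
        strata[(inst["region"], inst["size_band"])].append(inst)

    total = len(institutions)
    sample = []
    for _, members in sorted(strata.items()):
        # Proportional allocation, rounded, with at least one
        # institution per stratum; rounding is why the final n
        # only approximates the target.
        k = max(1, round(target * len(members) / total))
        sample.extend(rng.sample(members, min(k, len(members))))
    return sample

# Invented master list: 420 institutions across 4 regions x 3 size bands.
master = [{"name": f"U{i}", "region": region, "size_band": band}
          for i, (region, band) in enumerate(
              [(r, s) for r in ("NE", "South", "Midwest", "West")
                      for s in ("small", "medium", "large")] * 35)]
sample = stratified_sample(master, target=100)
```

With these invented, evenly sized strata the draw is a few institutions under the target; uneven strata and the at-least-one rule can just as easily push it over, which is the point of the sketch.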
Building on previous researchers' methodologies (Burhans 1983; Sideris 2004), during the fall of 2011 I collected all catalogs or bulletins and searched institutional websites to find documents that provided answers to the variable list I had identified, drawing on the methodologies developed with Melinda Knight for the "top university study." Thus, for each institution I have a host of assessment reports, captured websites, and schedule snapshots along with official catalogs or bulletins. Data were located in similar places: catalogs first and foremost, but also institutional research reports, department and program websites, assessment units' publications, and registrar documents and reports. This primary data set was then amplified by a survey distributed to identified and confirmed leaders at each of the schools; this method provided some additional data and allowed for triangulation through cross-checking.

Preliminary data gathered, I drew on previous state-of-the-field studies, of which there are many, beginning with Warner Taylor's in 1929 (see table 2.1 for a comprehensive list of studies), to develop specific questions and a first draft of categories to use for sorting data pertaining to these questions. For some areas of inquiry, scholarship within the field prompted me to develop additional categories (e.g., prominent discussion of the writing-about-writing movement led me to include this category). These initial drafts of variables and associated categories (or values), developed prior to data collection, were expanded and revised significantly as I collected and reviewed the data (e.g., I added categories in placement methods, such as the International Baccalaureate, as the data taught me about possibilities that previous researchers hadn't discussed).
Thus, like many other writing studies researchers (e.g., Barton and Donahue 2009; Brandt 2014; Dadas 2013; Gladstein and Regaignon 2012; Purcell-Gates, Perry, and Briseno 2011), I was guided by a grounded-theory approach (Birks and Mills 2015; Glaser and Strauss 1967). Through a process of moving back and forth between research on other data sets and review of the data I had collected, I created a list of 148 variables to guide my further data collection. The variable list is too lengthy for inclusion here; it can be found in appendix D. I have arranged the variable list with notations that explain my sources for each variable and how conflicts were resolved when two sources disagreed (e.g., the catalog and the survey response).

In the presentation of my study discoveries, I compare my findings to those presented by previous researchers (e.g., Burhans 1983; Gere 2009; Kitzhaber 1962, 1963; Larson 1994; Moghtader, Cotch, and Hague 2001; Smith 1974; Taylor 1929; Wilcox 1968, 1969, 1972, 1973; Witte, Cherry, and Meyer 1982; Witte et al. 1981; Witte, Meyer, and Miller 1982). In my selection of data, I have sought out a range of evidentiary points that help build multifaceted, measurable constructs for understanding FYC instruction in the sample, similar to the construct-representation work undertaken to develop the Framework for Success in Postsecondary Writing (O'Neill et al. 2012), which constructs writing success as gained from the development of necessary "habits of mind" and also from a variety of "writing, reading, and critical analysis experiences" (O'Neill et al. 2012). Variable types include the following range:

Quantitative variables—for example, how many courses in FYC are required?

Categorical-descriptive variables—for example, what is the title of the WPA's position?

Categorical-dichotomous variables—for example, is there a WAC requirement? (yes or no)

Categorical-nominal variables—for example, what instruments or methods are used for placement decisions?

Categorical-ordinal variables (variables that can be ranked)—for example, what level of autonomy do faculty have in designing syllabi?

Data collected, I worked systematically, alone or with a research assistant, to code the data. Many of the data were quantitative or of a categorical nature that required little interpretation. However, where interpretation was required, I worked with a research assistant, blind double coding and then discussing the cases we disagreed on. More details about the data collection and coding methods are provided in the appendices (see apps. A and C), though I'll make two other points here. First, in data gathering my effort was to lean toward the simplest, most reliable data points by asking questions for which the responses were indisputable—numeric or categorical. Second, I ventured into interpretative areas with care and some ground rules: when I made judgments regarding the emphases of courses and programs as determined by reading course descriptions and outcomes-related documents, I coded these data with a second coder, using coding sheets that were pretested for high interrater reliability.

Data coded, I conducted my analysis through both quantitative and qualitative means. The quantitative analyses begin with reporting frequencies but extend to identifying and reporting associations among variables, allowing readers to see when conditions converge.
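The book does not specify what statistical software supported this work, so purely as an illustration: the two quantitative checks just mentioned (agreement between blind double coders, and association between pairs of categorical variables) can be sketched with Python's standard library alone. The coders' labels and the 2x2 table below are invented sample values, not data from the study.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement between two coders who
    independently ("blind") coded the same categorical variable."""
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    # Agreement expected if the two coders' labels were independent.
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

def chi_square(table):
    """Pearson chi-square statistic for a cross-tabulation given as a
    dict keyed by (row_value, column_value) with counts as values."""
    rows = sorted({r for r, _ in table})
    cols = sorted({c for _, c in table})
    n = sum(table.values())
    row_totals = {r: sum(table.get((r, c), 0) for c in cols) for r in rows}
    col_totals = {c: sum(table.get((r, c), 0) for r in rows) for c in cols}
    stat = 0.0
    for r in rows:
        for c in cols:
            exp = row_totals[r] * col_totals[c] / n
            stat += (table.get((r, c), 0) - exp) ** 2 / exp
    return stat

# Invented example: two coders rate four course descriptions as
# "process" or "skills" oriented.
a = ["process", "process", "process", "skills"]
b = ["process", "process", "skills", "skills"]
kappa = cohens_kappa(a, b)  # 0.5: agreement well above chance, not perfect

# Invented 2x2 cross-tabulation: tenure-track WPA (yes/no) against
# small FYC classes (yes/no) across 50 hypothetical institutions.
table = {("yes", "yes"): 20, ("yes", "no"): 5,
         ("no", "yes"): 5, ("no", "no"): 20}
stat = chi_square(table)  # 18.0 for this table
```

A larger chi-square statistic relative to the table's degrees of freedom signals a stronger departure from independence, which is the sense in which, later in the chapter, a tenure-track WPA is said to be "positively associated" with conditions such as smaller class size.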
With the help of basic descriptive and inferential statistics, I was able to conduct association, correlation, and cross-tabulation analyses to provide indicators of the variables that typically were present when practices were consonant with those advocated in the literature and by our national organizations, and to show when they were not. For example, the presence of a tenure-track WPA is positively associated with smaller class size, an emphasis on rhetorical instruction, and professional training of writing teachers. Along with presenting correlations and associations within the data and against data pulled from the Carnegie Classification database (Carnegie Foundation 2011), in my analysis I compare my findings to relevant studies on the state of the field, along with findings from a 2010 study (the data-collection date) I conducted with Melinda Knight (Isaacs and Knight 2012, 2014). In this study, which I will refer to as the top university study, Knight and I collected data on 101 four-year colleges and universities, across institutional types, based on a selection derived from the annual rankings published by US News and World Report.

The Value and Limitations of the Bird's-Eye View

In the last decade, the discipline has embraced empirical methodologies, both close, careful studies by researchers such as writing center scholars Isabelle Thompson and Jo Mackiewicz (Mackiewicz and Thompson 2013, 2014; Thompson 2006, 2009; Thompson and Mackiewicz 2013; Thompson et al. 2009) and what I call a bird's-eye view by WAC researchers like Thaiss and Porter (2010) and Melzer (2014) and by the team of researchers working on the ongoing WPA/WCD (writing center director) study (Fralix et al. 2015). Bird's-eye studies in our field typically use surveys and self-reporting as their primary means of data collection.
Mass survey was quite possibly the best option for many years, and there has been so much published research based on voluntary surveys that the method is seldom questioned. Yet this method relies on information provided by interested, willing parties. Does the investment of these individuals matter? Does it skew the results? These are the questions I asked when I designed this study. SCUs tend to include many colleges and universities that will not be represented in these surveys because they do not have WPAs or other faculty and staff who participate on the WPA and similar listservs and who are willing to answer the many surveys distributed through them. I think any real understanding of the impact of our field requires that we gather and report on what is happening at institutions that are not part of our community as defined by membership in one of our field's organizations. To avoid the problem of self-selection skew, then, two decisions were essential to my study: to draw a random sample and to gather data through publicly available information.

Thus this study is most similar to those conducted by Witte and colleagues, and also by Ron Smith, studies at least twenty years old (Larson 1994; Smith 1974; Witte et al. 1981; Witte, Cherry, and Meyer 1982; Witte, Meyer, and Miller 1982). I was interested to see any evidence of recent changes to the higher education landscape, such as the increased reliance on contingent faculty; the rise in assessment and accountability; the development of extensive university and college websites; and the 2008 reauthorization of the Higher Education Act, which requires colleges and universities to publicly (via the Internet) report course schedules, transfer policies, and other relevant information (Tromble 2008).
We are more public with our practices and processes, as interest in accountability has risen with concerns over debt and graduation rates, giving rise to increased federal investigation and oversight and to requirements for increasingly public reporting by colleges and universities (White, Elliot, and Peckham 2015, 12).

There are drawbacks to the bird's-eye approach as well. First and foremost, the approach precludes a close view, so texture, details, and, most of all, explanations for choices made are not provided. This study does not tell you why phenomena have occurred; it simply tells you what has occurred. For textured close studies of the challenges of practical implementation of best practices in the field, we are fortunate to have many and varied case studies and multi-institutional studies to review. Second, the analyses, and the conclusions I draw, are based on what can be found through public-document and website review. Some of these conclusions are more subject to debate than others, based on the nature of the inquiry. Thus, for example, class size based on the numbers I gathered from looking at actual course schedules strikes me as pretty close to indisputable, with the caveat that the data are tied to one specific semester. However, I also report on such matters as emphasis on rhetorical instruction and argumentation in FYC, relying on course descriptions from official catalogs and outcomes-type statements found in the catalog, in reports, and on university and college websites. I imagine faculty might quarrel with the findings here, noting that these public documents are out of date, political, or simply do not tell the whole story. These are reasonable points, so readers should remain aware that the story here is based on the public record.
Quite simply, the quality and quantity of data provided in reliable public sources shaped my study greatly. This study does not benefit from the kind of insider and deep knowledge that an interview or a review of a range of syllabi would provide. However, as Clinton Burhans asserted thirty years ago, from this kind of information (today electronically mined, in his day sent by post) we can assemble "an informative and reasonably indicative body of data" (Burhans 1983, 640). I have read and coded hundreds of these institutional documents, with support from a second coder when data required any interpretation or judgment. Through this process I have come to value the powerful signaling of phrases such as "writing as a process," "rhetorical awareness," and "purpose and audience" on the one hand, and "effective written products," "fundamentals of writing," and "grammar, usage, and mechanics" on the other. All these phrases, and ones like them, were found throughout the data, again and again, contributing to a reader's sense of the construct of writing each institution is presenting. As explained by Edward White, Norbert Elliot, and Irvin Peckham, "[A] writing construct is that stated conceptualization of writing that informs a writing program and its assessment" (White, Elliot, and Peckham 2015). It is also clear that institutions articulate writing constructs repeatedly, and while my methodology captured several of those articulated constructs, it captured only those that are part of the public record.

Representatives at SCUs have also reminded me of the varying degrees to which outcomes for their courses are made public via their websites, suggesting another limitation (one lessened by the survey).
As another prime example of a limitation, originally I planned a much greater investigation of writing centers, but little material about writing centers is consistently available in institutional catalogs or in other public documents. Whereas one institution would present a writing center's mission and approach, another would only inform external readers of its services. In my discussion of data points, I have described the sources of my data in an effort to invite readers to consider the limitations of the findings. Nonetheless, I think there is value in what I have found, for in this age of readily available information via the Internet, and of public and government pressure on higher education to be transparent and accountable, what can be gleaned about writing instruction through public, official, and available documents is increasingly extensive and important.

Beyond limitations due to methodology, this research is limited by my own point of view, as is always the case with human research. A brief discussion of my own standpoint and the biases, assumptions, and gaps it generates is necessary, particularly because I come to this project with an argument about the collective skew of previous studies toward writing studies' "insiders" and therefore weighted against SCUs. First, I have worked at New Jersey's Montclair State University, an SCU, for twenty years. I am one of those academics who has never intellectually strayed too far from her graduate-school roots. I was trained at the University of Massachusetts Amherst, where I studied with Anne Herrington, Charles Moran, Peter Elbow, and Marcia Curtis, who taught from a strong commitment to basic writing, first-year composition, process writing, WAC, humanistic assessment, and a belief that writing is always personally and individually inflected. As Herrington and Curtis wrote in Persons in Process, their study of four students' experiences with college writing, "Students . . . use the drafting process as much to configure their identities in relation to their various subjects as to master the forms, genres, and language in which these subjects were conveyed" (Herrington and Curtis 2000, 383). I suspect that my principal ideas about what effective writing instruction and administration look like have been deeply informed by those experiences: thus I am positively inclined toward FYC, process writing, and WAC, and I am suspicious of, and disinclined toward, assessment methodologies and writing instruction that remove the subjectivity of the writer from writing. Beyond formal schooling, the people I have always been around, at home in suburban Boston and then at UMass and Montclair, have taught me the (not exclusively) northeastern values of irony, skepticism, moral responsibility, the Western intellectual tradition, and, hard to admit, perhaps intellectual superiority. I work against the latter especially, but it really is in the drinking water I've been sipping for fifty years. More personally, I am a middle-class, middle-aged, straight white woman with three sons still some years from college. Over the years I think I have become more conservative in some ways (for which I blame parenthood principally), but both personally and professionally, I haven't changed at all in respect to my deep, practically instinctive interest in working to advance the possibilities of those who, by the luck of birth, face a more difficult road in terms of education and economic and social possibilities. Finally, in the five years I have worked on this project, I have gone from WPA to English department chair to associate dean.
I hope the move up the administrative ladder has served to make me more familiar with the context in which decisions about teaching and administering writing programs are made, though I am aware that there are readers who believe little is gained and much is lost when an individual moves from teacher to administrator. I think my standpoint, which I have surely not adequately explored, shaped the questions I asked in this study in large and small ways: from the sample selection and focus on SCUs to my deep attention to FYC and my decision to look for—although I did not find it—evidence of expressionism in these schools' approaches to FYC.

In summary, while this study is in the vein of many earlier studies, perhaps Witte and colleagues' most of all (Witte, Cherry, and Meyer 1982; Witte, Meyer, and Miller 1982; Witte et al. 1981), Writing at the State U provides historical context while capitalizing on publicly available data and fairly simple statistical analyses that have not often been used by researchers conducting "status" research of this nature.

Chapter Overview

I have often been asked to characterize my findings along the lines of this question: is the news for SCUs good or bad? Judgment—good or bad—depends on what your priorities are. For example, do you care most about class size or instructional approach? Assessment methodologies or the extent of faculty training? The presence of WPAs or reliance on adjunct faculty? An increase in writing majors or the presence of any kind of vertical writing programming?
Further, even within a single area of inquiry, whether you perceive the findings as good or bad depends on your level of optimism or pessimism about the practical possibilities of implementing best practices in the contemporary landscape of public higher education, which is, as always, limited by budgetary priorities or budgets themselves and by a culture that historically hasn't focused on writing instruction, much less seen writing studies as a discipline. If you come to this book as an optimist, you'll read much herein that will disappoint you; if you are more pessimistic, I think you'll find much that pleasantly surprises. I myself vacillate between these attitudes: on the one hand, I went into this study worried that the majority of SCUs had been largely left out of the writing revolution of the last half century (not the case!), and on the other hand, based on my own experience, I was aware that radical change was possible. So look to the findings of this study with an awareness of your own dispositional instincts and also, perhaps, of your subjective position as determined by the kind of institution you have worked in, as those local experiences likely deeply affect what you expect to be possible elsewhere. A brief story to illustrate the point: recently I had the opportunity to spend time with faculty at an Ivy, and when the discussion turned to graduation rates for undergraduates at US colleges and universities, otherwise well-informed faculty from this institution could not believe me when I told them that national six-year graduation rates were about 60 percent. From their vantage point, anything less than 90 percent for four-year graduation was unacceptable, and also unfathomable.
In the chapters that follow, I present my findings not only in the context of the historical record and previous studies, detailed in chapter 2, but also within the often-heated theoretical debates that surround the data. Thus, within discussions of the major findings, I provide a snapshot history of the debates that surround them, which I think will provide valuable understanding to readers. Two examples should be useful: the question of the role of literature in first-year composition and the question of independence for the discipline of writing studies. Some readers might think that the issue of literature in composition is a dead one, resolved long ago, but the issue will simply not be put to rest, as evidenced by the presence of literature in composition courses (see chapter 3) and also the currency of the question in the scholarly research (works published in the last decade alone include Anderson and Farris 2007; Bergmann and Baker 2006; Foley and Huber 2009; Isaacs 2009; Raymond 2010). For readers who have not had first-hand experience with a comp-lit war, some sense of the history of this debate is useful in understanding its importance and why it remains an issue. Similarly, the issue of writing studies independence has been the subject of journal discussion since at least 1975 (Tade, Tate, and Corder 1975), and the question has generated reams of scholarship (see, for example, Balzhiser and McLeod 2010; Carpini 2007; Cushman 2003; Estrem 2007; Farris 2013; Giberson and Moriarty 2010; Howard 2007; Lowe 2007; Mendenhall 2013; National Council of Teachers of English 2009; O'Neill, Crow, and Burton 2002; Schumaker 1982; Scott 2007; Shamoon and Martin 2007; Taylor 2007; Weisser and Grobman 2012). Yet, as is made clear in chapter 5, scholarly discussion (fulsome and promising) and practice (not much happening) can be far apart.
Readers who, understandably, simply need answers are welcome to skim through the book for figures and tables (listed after the table of contents) or to simply turn to the chapter-end summary boxes of statistics I think are most likely to be useful on a practical level. The chapters are written as stories of what has happened all around the data—the issues and the other studies—but there are times when an administrator simply needs the bottom-line statistics.

In chapter 2, "Assessments of Writing Studies' Practices: 1927 to the Present Study," I give a historical review of many of the studies that precede this one, tracing the development of questions and concerns that have occupied scholars. These include questions about FYC course focus, approach, and staffing; the status of the faculty and administrators who support FYC; and such issues as placement and course substitutions, as well as institutional issues such as the placement of writing studies within a college or university. Throughout these studies are references to fiscal support and the great challenge of finding appropriate funding for teaching and supporting writing. It's a recurring theme: anticipating, surviving, or recovering from budget crises. This theme gives rise to the final substantive section of the historical review—understanding the last twenty-five years of funding for public higher education.
While writing studies scholars typically do not go beyond the popular press or the Chronicle of Higher Education to understand public higher education funding, my review of National Center for Education Statistics (NCES) and State Higher Education Executive Officers (SHEEO) research studies, as well as analyses by higher education economists, suggests to me that we are missing part of the picture: while support for public higher education has undoubtedly decreased, tuition increases have quite ably enabled institutions to maintain the same relative per-student dollars for spending, deeply discounting claims that the "current" state of affairs for funding writing studies staffing is merely the effect of budget cuts. I urge readers to wade into the perhaps unfamiliar waters of NCES and SHEEO reports and the Journal of Higher Education or, at the very least, to pay attention to this review in chapter 2. We must understand and broadcast that this state of affairs, most pointedly the increasing reliance on adjuncts, is not merely a case of decreased dollars but is rather the effect of priorities.

In chapter 3, "The Back End of First-Year Composition: Institutional Support through Infrastructure and Policies," I report on the institutional structures that undergird what colleges and universities are able to do with FYC. Faculty may have little or no influence in the matters of class size, administrative release for writing program administration, options for staffing, and the ability to offer students credit-bearing basic writing instruction, despite the importance of these matters to teaching writing well. These structures and policies are typically decided upon at college, university, or even state levels and not at the programmatic or departmental level, though they deeply affect what faculty are able to do in terms of writing instruction.
I report on infrastructure and support provided for FYC through discussion of FYC program location, WPA administrator status, and course staffing, looking at these findings in comparison to previous studies and other populations and in respect to institutional type. In the area of assessment, I look at placement and exemption provisions at these schools, finding that there are many more options than I believe many of us realize—these agreements and arrangements being made at a high institutional or state level. In this chapter, significant differences are shown to be associated with institutional size and institutional selectivity, with FYC students in nonselective colleges and universities most likely to be taught by a tenure-track professor.

In chapter 4, "What Are We Doing with First-Year Composition?," I examine the impact the field has on general education writing instruction at SCUs. To report on the contents of first-year composition courses, I relied on catalogs, websites, and official institutional documents and also on representatives' reports of practices gathered through the survey, as discussed in appendix A. I report on SCUs' FYC requirement: its presence and absence, as well as instructional emphases, number of courses, and use of outcomes statements or other articulations of student-learning expectations. I report on what these SCUs are doing in respect to research instruction and the study of literature and describe and categorize the foci of these courses. SCUs' emphasis on process writing methodologies and skills instruction is also explored. In reporting on these findings, I have identified characteristics significantly associated with institutional characteristics, such as school size (pulled from the Carnegie Classification system), region, salient instructional practices, characteristics of the faculty, and administrative structures.
My research reveals FYC as deeply embedded in SCUs, with course content often paradoxically at once in and out of sync with the field's recommendations for best practices.

In chapter 5, "Beyond First-Year Composition," I discuss writing in the disciplines, general education writing, writing centers, and vertical writing programming to offer a view of how public colleges and universities are addressing writing beyond the first-year composition requirement. In this section I look at the scholarship on vertical writing programming, from WAC to the development of minors and majors in writing studies, and I compare what I observed at SCUs to what is reported by other researchers. In addition, I report on the presence of writing centers and their locations, considering these findings against other studies, as well as on the debate over writing center fiscal and administrative location.

In chapter 6, "Writing at the State Comprehensive U," I summarize major findings of the study and address the future of writing instruction, administration, and support at SCUs. I offer strategies and suggestions for improving writing instruction and support at state comprehensive colleges and universities, identifying those areas most under faculty control. In addition, I speculate on some roles national leaders and organizations such as NCTE might take on to think and act as a discipline, not as faculty at individual institutions.

In appendix A, I provide a more detailed description of the methods used to conduct this study and analysis. I provide a narrative of my method and share some of the dilemmas I addressed during the course of designing and implementing the study. Data collection is pinned to the fall 2011 semester.
With very few exceptions, all catalogs and bulletins cover that time period, and other publicly available documents from university and college websites were pulled during the period from September 1, 2011, to December 31, 2011. In addition, the survey was sent out and completed during that same time period. Drawing on both the experiences of other researchers who have shared their methods and my own experiences, I attempted to fashion a method that would get me a full bird's-eye view: distanced, but wide. Not surprisingly, my methodological choices resulted in many drawbacks or limitations, which I outline more fully in appendix A. Appendix B is a reproduction of the online survey completed by representatives at ninety-two of the institutions in the sample. In appendix C, readers will find the coding sheets I developed and relied upon. The final two appendices are lists: appendix D presents the variables as I had them organized in SPSS (software for analyzing data), and appendix E lists all the institutions in the study.

I hope that Writing at the State U will provide a baseline for other researchers who advocate for particular writing-programming approaches and analyze trends in the field. I especially hope this book will also be useful to writing faculty at state comprehensive universities who wish to understand better the trends and possibilities at institutions like their own for practical, local consideration and action. I imagine two different types of readers. First, writing program faculty and administrators who are in the midst of a crisis (a common state of affairs) and need information quickly should review the table of contents or index to find the section where such information is discussed and also consult the end-of-chapter statistical-summary boxes.
Second, readers who are concerned about and interested in writing at the state comprehensive university will find in Writing at the State U a story about writing studies that others who have surveyed the field have not yet told. In sum, readers will learn that while there are pockets of deep concern at SCUs, the evidence suggests that the practice of writing instruction and support has advanced significantly in concert with the field's maturation. The state comprehensive university is a major part of the four-year higher education landscape and thus should be as well understood as the research university and the small liberal arts college.

Notes

1. Project approved by the Montclair State University Internal Review Board.
2. The term state comprehensive university, and its acronym SCU, was selected over public regional university, which is also used frequently, in deference to Bruce B. Henderson of Western Carolina University, who has published extensively on SCUs and who founded a journal by this name (Henderson 2007).
3. The exact number depends on source data and definition; for example, the Carnegie Classification system and the AASCU have different numbers.
4. Enrollment paints a similar picture: the National Center for Education Statistics reports that 7,709,197 students were enrolled in four-year public institutions in the fall of 2009; AASCU reports that 3,800,000 students were enrolled in their member institutions (AASCU 2010).
5. The National Center for Education Statistics, in a 2009 report, reports that 7,709,197 students were enrolled in four-year state institutions; the American Association of State Colleges and Universities claims 3.8 million students at their member institutions. AASCU is open to all state institutions, whereas the Association of Public and Land-Grant Universities (APLU) caters to public research universities only.
6.
In statistics, stratification is a method of sampling designed to improve the representativeness of a sample. More specifically, it refers to a method of sampling wherein subgroups are created (in this case, by region and by size) and each subgroup is then sampled randomly, rather than drawing a random sample from the entire population at once.
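For readers unfamiliar with the technique, the stratified sampling described in note 6 can be sketched in a few lines of code. This is a minimal illustration of the general method, not a reproduction of the study's actual procedure: the institution names, the regions, the stratum sizes, and the 20 percent sampling fraction below are all invented for the example.

```python
import random
from collections import defaultdict

# Hypothetical population: (name, region, size) tuples standing in for SCUs.
# Four regions x three size categories x 25 institutions each = 300 total.
population = [
    (f"SCU-{i}", region, size)
    for i, (region, size) in enumerate(
        (r, s)
        for r in ("Northeast", "South", "Midwest", "West")
        for s in ("small", "medium", "large")
        for _ in range(25)
    )
]

def stratified_sample(pop, fraction, seed=0):
    """Group the population into (region, size) strata, then draw a
    simple random sample of the given fraction from each stratum."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for name, region, size in pop:
        strata[(region, size)].append((name, region, size))
    sample = []
    for members in strata.values():
        k = max(1, round(len(members) * fraction))  # at least one per stratum
        sample.extend(rng.sample(members, k))
    return sample

sample = stratified_sample(population, fraction=0.2)
print(len(sample))  # 5 drawn from each of the 12 strata, 60 in all
```

The payoff over simple random sampling is visible in the result: every region-by-size subgroup is guaranteed representation in proportion to its share of the population, which a single random draw from all 300 institutions would not guarantee.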