AIOU Solved Assignments 1 & 2, Code 8604 (B.Ed 1.5 Years), Spring 2020
ASSIGNMENT No. 1 & 2
Course: Research Methods in Education (8604)
Q.1 Why is research in the field of education necessary? How does educational research help in the advancement of education? Illustrate your answer with examples. (20)
In epistemology, a common concern with respect to knowledge is which sources of information are capable of giving knowledge. The following are some of the major sources of knowledge:
- Perception — that which can be perceived through the experiences of the senses. The view that experience is the primary source of knowledge is called empiricism.
- Reason — reason can be considered a source of knowledge, either by deducing truths from existing knowledge, or by learning things a priori, discovering necessary truths (such as mathematical truths) through pure reason. The view that reason is the primary source of knowledge is called rationalism.
- Introspection — knowledge of one’s self that can be found through internal self-evaluation. This is generally considered to be a sort of perception. (For example, I know I am hungry or tired.)
- Memory — memory is the storage of knowledge that was learned in the past, whether of past events or current information.
- Testimony — testimony relies on others to acquire knowledge and communicate it to us. Some deny that testimony can be a source of knowledge and insist that beliefs gained through testimony must be verified in order to count as knowledge.

Do we, the disciples of Jesus, possess, through Scripture and other means, a reliable source of knowledge of reality, or do we not? The possession of knowledge — especially religious and moral knowledge — is essential for a life of flourishing. But to answer this question we must first answer another: what exactly is knowledge, and what does it mean to say Christian teaching provides it?

Knowledge Defined
Let’s start with a definition. Knowledge is: to represent reality in thought or experience the way it really is on the basis of adequate grounds. To know something (the nature of cancer, forgiveness, God) is to think of or experience it as it really is on a solid basis of evidence, experience, intuition and so forth.
Little can be said in general about what counts as “adequate grounds.” The best one can do is to start with specific cases of knowledge and its absence in, say, art, chemistry, memory, Scripture and logic, and formulate helpful descriptions of “adequate grounds” accordingly. “Adequate grounds” may be a variety of things, including scientific evidence, widespread perception of beauty and excellence in a piece of art, clarity and vividness in recalling something in the past, and so forth. Different areas of knowledge will have different kinds of evidence relevant to those areas, and when the evidence reaches a certain level, it becomes adequate to provide the kind of support one needs to have knowledge of something. Again, little more can be said in general about what counts as “adequate.” Only by focusing carefully on particular cases can “adequate” be clarified.

Three Important Clarifications about Knowledge
Before we begin, please note three important things. First, knowledge has nothing to do with certainty or an anxious quest for it. On this view, one can know something without being certain about it. You can even know something while admitting that you may be wrong.

Second, one can know something without knowing how one knows it. If one always had to know how one knows something before one could know it, one would also have to know how one knows how one knows it, and so on to infinity. Life is too short for such lengthy regresses, and thankfully, we often just know things without having any idea how we do. Thus, a person could know he or she has experienced the presence of God without being able to tell a skeptic how he or she knows this. When Christians claim to know this or that, they are not saying that they always know how they know the things they do. For example, many Christians have had experiences in which they knew that God was guiding them in a certain way, but they may not have been able to say exactly how they knew this.
Now, it is often the case that some in the Christian community — for example, experts in New Testament studies or philosophy — do, in fact, know how we Christians know certain things. But it is not necessary for the average believer to have this information before they are within their rights to claim to know God is real and so forth.

Finally, one can know without knowing that one knows. Consider Joe, an insecure yet dedicated high school student, who is about to take his history final. He has studied thoroughly and knows the material, but when a friend asks him if he is prepared for the test, he says, “no.” In this case, Joe actually knows the material, but he doesn’t know he knows it. Thus, he lacks confidence. Today, cultural elites in the media and university tell us that we cannot know that God is real, and so on. As a result, while many Christians actually do know various things relevant to Christianity, they lack confidence because they do not know that they have this knowledge.

Three Kinds and Sources of Knowledge
In addition to these three observations about knowledge, there are three different kinds of knowledge:
- Knowledge by acquaintance: this happens when we are directly aware of something; e.g., when I see an apple directly before me or pay attention to my inner feelings, I know these things by acquaintance. One does not need a concept of an apple or knowledge of how to use the word “apple” in English to have knowledge by acquaintance with an apple. A baby can see and recognize an apple without having the relevant concept or linguistic skills. Knowledge by acquaintance is sometimes called “simple seeing” — being directly aware of something. In the case of the Christian, we sometimes know God by directly experiencing His presence, forgiveness, grace and so on.
- Propositional knowledge: this is knowledge that an entire proposition is true. For example, knowledge that “the object there is an apple” requires having a concept of an apple and knowing that the object under consideration satisfies the concept. Propositional knowledge is justified true belief; it is believing something that is true on the basis of adequate grounds. For example, the Bible is the Christian’s ultimate, final source of propositional knowledge about the doctrines of Christianity.
- Know-how: this is the ability to do certain things; e.g., to use apples for certain purposes. We may distinguish mere know-how from genuine know-how or skill. The latter is know-how based on knowledge and insight and is characteristic of skilled practitioners in some field. Mere know-how is the ability to engage in the correct behavioral movements, say, by following the steps in a manual, with little or no knowledge of why one is performing these movements. In Christianity, biblical know-how that is directed at living life well is called wisdom.

Focusing on Knowledge by Acquaintance
One can think of a tree, God, or whether or not one is angry, but these are all different from being directly aware of the tree, God, or one’s inner state of anger. Knowledge by acquaintance is an important foundation for all knowledge; in an important sense, experience or direct awareness of reality is the basis for everything we know. One should not limit what one can see or be directly aware of to the five senses. One can also be directly aware of one’s own soul and inner states of thoughts, feelings, desires and beliefs by introspective awareness of one’s inner life. One can be directly aware of God and His presence in religious experience, of His speaking to one in guidance, or of the Spirit’s testimony to various things.
From Plato to the present, many philosophers have believed (correctly, in my view) in what is called rational awareness — the soul’s ability to be directly aware of aesthetic and moral values, numbers and the laws of mathematics, the laws of logic, and various abstract objects such as humanness and wisdom. The important thing to note is that we humans have the power to “see” — to be directly aware of, to directly experience — a wide range of things, many of which are not subject to sensory awareness with the five senses.

Seeing As and Seeing That
To “simply see” an apple is to be directly aware of it. To see something as an apple requires that one has acquired the concept of being an apple (perhaps from repeated exposure to simply seeing apples) and applies it to the object before oneself. To see that an object is an apple, one must have the entire thought in one’s mind, “The object before me is an apple,” and judge that the object genuinely corresponds to that thought.

Given the reality and nature of knowledge by acquaintance, it follows that knowledge does not begin with presuppositions, language, concepts, one’s cultural standpoint, worldview or anything else, as is so often claimed by postmodern points of view. It starts with awareness of reality. Seeing as and seeing that do require presuppositions, concepts, and so forth. One’s presuppositions will influence how one sees things as such and such (e.g., as a healing from God), and one’s worldview will influence one’s seeing that or judging that (e.g., seeing/judging that this event is a miraculous healing). And because we have direct acquaintance with the world itself prior to seeing as (applying a concept to something) or seeing that (judging that an entire proposition is true), we can compare the way we see or judge things with the things themselves and thereby adjust our worldview.
For example, because we actually see the person get well, we can confirm or disconfirm that we are right to see the event as, or judge that it was, a miracle from God. Knowledge by acquaintance gives us direct access to reality as it is in itself, and we actually know this to be the case in our daily lives. In closing, ponder the three sorts of knowledge we have discussed and reflect on how you have each sort in relation to your Christian faith. As you do, keep in mind the three clarifications about knowledge to ensure that you reflect accurately on the role knowledge plays in your own Christian pilgrimage.
Q.2 What precautions should be taken in experimental research in education to obtain reliable results? (20)
One important measure of the quality of a scientific experiment is validity: what it is, and how it can be improved in a scientific investigation.
Non-Experimental & Experimental Research
Alright! It’s time to learn something using research by … performing a non-experimental study? Wait, wait, wait! Is it possible to have a non-experimental study? Is that sort of like sugar-free candy? Something you’re supposed to have that is replaced by something that makes you scratch your head?

Before we discuss research designs, though, you need a brief walkthrough of some of the terms I am going to throw at you. A predictor variable is the portion of the experiment that is being manipulated to see if it has an effect on the dependent variable. For example, do people eat more Gouda or cheddar cheese? The predictor variable here is the type of cheese. Now, every time you eat cheese, you’ll think about predictor variables. When I say subjects, I just mean the people in the experiment or the people being studied.

Experimental research is when a researcher is able to manipulate the predictor variable and subjects to identify a cause-and-effect relationship. This typically requires the research to be conducted in a lab, with one group placed in an experimental group, or the ones being manipulated, while the other is placed in a placebo group, an inert or non-manipulated condition. A laboratory-based experiment gives a high level of control and reliability.

Non-experimental research is the label given to a study when a researcher cannot control, manipulate or alter the predictor variable or subjects, but instead relies on interpretation, observation or interactions to come to a conclusion. Typically, this means the non-experimental researcher must rely on correlations, surveys or case studies, and cannot demonstrate a true cause-and-effect relationship. Non-experimental research tends to have a high level of external validity, meaning it can be generalized to a larger population.
So, now that we have the basics of what they are, we can see some of the differences between them. The first difference is the very basis of what they do: their methodology. Experimental researchers are capable of performing experiments on people and manipulating the predictor variables. Non-experimental researchers are forced to observe and interpret what they are looking at.

Being able to manipulate and control something leads to the next big difference. The ability to find a cause-and-effect relationship is kind of a big deal in the world of science! Being able to say X causes Y is something that has a lot of power. While non-experimental research can come close, non-experimental researchers cannot say with absolute certainty that X leads to Y. This is because there may be something they did not observe, and they must rely on less direct ways to measure.

For example, let’s say we’re curious about how violent men and women are. We cannot have a true experimental study because our predictor variable for violence is gender, and to have a true experimental study we would need to be able to manipulate the predictor variable. If we had a way to switch men into women and women into men, back and forth, so that we could see which gender is more violent, then we could run a true experimental study. But we can’t do that. So, our little study becomes a non-experimental study because we cannot manipulate our predictor variable.
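To make the terminology concrete, here is a minimal sketch of the cheese example in Python. The predictor variable is the type of cheese and the dependent variable is the amount eaten; all numbers are invented for illustration.

```python
# Made-up data for the cheese example: predictor variable = cheese type,
# dependent variable = grams of cheese eaten per sitting.
trials = [
    {"cheese": "gouda", "grams_eaten": 45},
    {"cheese": "cheddar", "grams_eaten": 60},
    {"cheese": "gouda", "grams_eaten": 50},
    {"cheese": "cheddar", "grams_eaten": 55},
]

# Group the dependent measure by each level of the predictor variable.
by_cheese = {}
for trial in trials:
    by_cheese.setdefault(trial["cheese"], []).append(trial["grams_eaten"])

# Compare the average amount eaten per cheese type.
for cheese, amounts in sorted(by_cheese.items()):
    print(cheese, sum(amounts) / len(amounts))  # cheddar 57.5, gouda 47.5
```

With made-up data like this we can describe a difference between the groups, but, as the lesson notes, only manipulating the predictor variable under controlled conditions would let us claim cause and effect.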
Pros & Cons of Non-Experimental Research
At first glance, there appear to be only disadvantages to non-experimental research. It cannot find cause-and-effect relationships, it cannot manipulate predictor variables, and its methods of study are often correlations or case studies. These are clear-cut disadvantages of non-experimental designs. However, non-experimental research does have at least some advantages over experimental design. A non-experimental study picks up the slack where an experimental design cannot go. As discussed earlier, to study the effects of gender experimentally, you would have to be able to manipulate a person’s gender, which you cannot. Other examples of predictor variables that cannot be manipulated include:
- Prison sentences (real prisoners, not like Zimbardo’s students)
- Current opinions
If you can’t manipulate it, then you can’t run an experimental study. However, non-experimental researchers are able to work with the variables that cannot be manipulated and controlled. The non-experimental design can study and examine questions that experimental researchers cannot.
Validity in Science
Although science is the best way to come up with accurate explanations for how the world works, not all scientific investigations are created equal; some are better than others. There are a couple of ways of measuring how good a scientific investigation is. Two terms that are often used are reliability and validity. Reliability is a measure of how repeatable an experiment is: whether the results are similar when the experiment is carried out multiple times. But perhaps more important is validity, which is a measure of how correct the results of an experiment are.

A particular experiment or investigation can be internally valid and externally valid. Internal validity is about whether the design and conduct of the experiment support its conclusion: whether the process follows a logically sound procedure and rules out alternative explanations for the result. External validity is about whether the conclusion from the experiment holds as the real explanation for the phenomenon in the wider world, beyond the particular sample and setting studied. If your goal is to make your result as close to the truth about the world as possible, then you need to improve your validity as much as you can. Most scientists are pretty successful at making their experiments internally valid, but external validity can be harder to achieve. In this lesson, we’re going to take a look at a few ways you can improve the validity of your experiments.
There are a number of ways of improving the validity of an experiment, including controlling more variables, improving measurement technique, increasing randomization to reduce sample bias, blinding the experiment, and adding control or placebo groups.

Controlling more variables is about making sure as few things as possible change during the experiment. In an ideal experiment, one thing is changed and one result is looked at; everything else remains the same. So, for example, if you wanted to know how fast balls of different masses roll down a particular hill, you would change the mass of the ball and keep everything else the same. You’d keep the material of the ball, the point of release, the measurement location and method, the humidity of the air, the height above sea level, and anything else you can think of the same. The more you keep the same, the more likely it is that your result will be valid.

Measurement technique can also be improved. Perhaps instead of measuring something by hand, you could use a computer and an electronic sensor. Or perhaps instead of having one person measure the results, you could have multiple people take measurements and compare their answers.

Increasing randomization is a way to reduce a particular validity problem: sample bias. That’s when the samples being investigated are not a representative sample of the population. For example, say you’re testing the effect of a drug and your trials contain mostly white males between 20 and 30 years old. That would not be a good sample due to the lack of breadth in age and gender. Or perhaps you are testing weight-loss drugs on people who are of a healthy weight already. Increasing the randomization of the sample will reduce this problem.
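The randomization step above can be sketched in a few lines of Python. The helper below is illustrative, not a standard library routine: it shuffles the participant pool and then deals participants into groups, so that no shared characteristic clusters into a single group by design.

```python
import random

def randomly_assign(participants, groups=("treatment", "placebo"), seed=None):
    """Shuffle the participant pool, then deal participants into
    groups round-robin so group sizes stay balanced."""
    rng = random.Random(seed)  # seed only for reproducible demos
    shuffled = list(participants)
    rng.shuffle(shuffled)
    assignment = {g: [] for g in groups}
    for i, person in enumerate(shuffled):
        assignment[groups[i % len(groups)]].append(person)
    return assignment

# Twenty hypothetical participants dealt into two groups of ten.
participants = [f"P{i}" for i in range(1, 21)]
assignment = randomly_assign(participants, seed=42)
print(len(assignment["treatment"]), len(assignment["placebo"]))  # 10 10
```

As the lesson notes, the larger the pool being shuffled, the more likely the resulting groups are approximately equal on the characteristics you did not think to control.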
Scientific Research Critique:
A great way to learn new concepts related to scientific experimentation is to examine those concepts in relation to published work. In this activity, students will review peer-reviewed, published scientific experiments to determine what methods were used to ensure validity and what could have been done differently to improve validity.
After reviewing the lesson, conduct a brief discussion with your students on the concepts learned in the lesson. Be sure to review the definitions of the main terms in the lesson:
- Internal validity
- External validity
- Controlling variables
When you are confident that your students have a strong understanding of these terms related to study validity, instruct them to select one scientific research article from a database of peer-reviewed scientific research or published journal articles.
- Peer-reviewed articles are going to offer the best example to your students of how to correctly conduct research.
- If you would like your students to have more opportunity for strong critiquing, allow them to choose research published in non-peer-reviewed journals or magazines.
Now, instruct your students to focus on the validity of the study they have chosen. They should:
- Review the article for evidence that each type of validity has been designed into the study.
- Document every aspect of the study that adds to the strength of the validity of the whole study.
Finally, your students should write a brief summary of any suggestions they may have that would have increased the validity of the study in question. They should:
- Indicate what type of validity their suggestion would impact.
- Explain why the suggestion would increase the validity of the study.
- Explain why they think the original study did not already include their suggestion.
Q.3 Distinguish external criticism from internal criticism in historical research. Give examples where necessary. (20)
Most research involves looking at what’s happening right now. But what if a researcher wants to look at the past and what it can tell us about the future? In this lesson, we’ll explore historical research design, its steps, and its pros and cons.
Stan’s parents survived the Holocaust and immigrated to the United States, where he was born and raised. He grew up hearing stories about the concentration camps and the horrible things done to people who were not accepted by the Nazi party. More than once, when telling stories about the camps, Stan’s mom would tear up and ask, ‘Why? Why did they do that?’ It’s a question that has haunted Stan for most of his life. Why did the Nazis take millions of people out of their homes, torture them, and then kill them? Stan wonders if the answer to that question could help prevent genocide in the future. He’s passionate about finding the answer.

Stan is a psychologist, and he has always done research that involves numbers. He looks at averages and percentages and tries to figure out how people act in a lab. But he’s starting to wonder if that’s the best way to attack his mother’s question. Instead, he thinks maybe he should focus on qualitative research, which involves examining non-numerical data. There are many ways to gather qualitative data. Let’s take a closer look at one type of qualitative research, historical design, and its strengths and limitations.
So, Stan decides that he wants to figure out why the Nazis acted the way they did. He wants to do historical research, which involves interpreting past events to predict future ones. In Stan’s case, he’s interested in examining the reasons behind the Holocaust to try to prevent it from happening again. Historical research design involves synthesizing data from many different sources. Stan could interview former Nazis or read diaries from Nazi soldiers to try to figure out what motivated them. He could look at public records and archives, examine Nazi propaganda, or look at testimony in the trials of Nazi officers. There are several steps that someone like Stan has to go through to do historical research:
- Formulate an idea: This is the first step of any research, to find the idea and figure out the research question. For Stan, this came from his mother, but it could come from anywhere. Many researchers find that ideas and questions arise when they read other people’s research.
- Formulate a plan: This step involves figuring out where to find sources and how to approach them. Stan could make a list of all the places he could find information (libraries, court archives, private collections) and then figure out where to start.
- Gather data: This is when Stan will actually go to the library or courthouse or prison to read or interview or otherwise gather data. In this step, he’s not making any decisions or trying to answer his question directly; he’s just trying to get everything he can that relates to the question.
- Analyze data: This step is when Stan goes through the data he just collected and tries more directly to answer his question. He’ll look for patterns in the data. Perhaps he reads in the diary of the daughter of a Nazi that her father didn’t believe in the Nazi party beliefs but was scared to stand up for his values. Then he hears the same thing from a Nazi soldier he interviews. A pattern is starting to emerge.
- Analyze the sources of data: Another thing that Stan has to do when he is analyzing data is to analyze the veracity of that data. The daughter’s diary is a secondary source, so it might not be as reliable as a primary source, like the diary of her father. Likewise, people have biases and motivations that might cloud their account of things; perhaps the Nazi soldier Stan interviews is up for parole, and he thinks that if he says he was scared and not a true Nazi believer, he might get out of jail.
External Criticism – Asks whether the evidence under consideration is authentic. The researcher checks the genuineness or validity of the source: is it what it appears or claims to be? Is it admissible as evidence?
Internal Criticism – After the source is authenticated, asks whether the source is accurate: was the writer or creator competent, honest, and unbiased? How long after the event happened was it reported? Does the witness agree with other witnesses?
Establishing the Genuineness of a Document or Relic
- Does the language and writing style conform to the period in question and is it typical of other work done by the author?
- Is there evidence that the author exhibits ignorance of things or events that a man of his training and time should have known?
- Did he report about things, events, or places that could not have been known during that period?
- Has the original manuscript been altered either intentionally or unintentionally by copying?
- Is the document an original draft or a copy? If it is a copy, was it reproduced in the exact words of the original?
- If the manuscript is undated or the author unknown, are there any internal clues as to its origin?
Checking the Content of a Source of Information
- What did the author mean by each word and statement?
- How much credibility can the author’s statements be given?
Q.4 Distinguish experimental research from non-experimental research. What are the different experimental designs that can be used to address educational issues? (20)
Suppose teachers wished to determine which of two methods of reading instruction was more effective: one that involved 20 minutes of direct instruction in phonics each day throughout the academic year in grade 1, or one that involved the current practice of having the teacher read a book to the class for 20 minutes each day throughout the year in grade 1. Similarly, suppose they wished to determine whether children learn better in a small class (i.e., with 15 students) or a large class (i.e., with 30 students). Finally, suppose they wished to determine whether requiring students to take a short quiz during each meeting of a college lecture class would result in better performance on the final exam than not giving quizzes. Each of these situations can best be examined by using experimental research methodology, in which investigators compare the mean performance of two or more groups on an appropriate test.

In experimental research, it is customary to distinguish between the independent variable and the dependent measure. The independent variable is the feature that differs between the groups: for example, whether 20 minutes of time each day is used for phonics instruction or reading aloud to students, whether the class size is small or large, or whether a short quiz is given during each class meeting. The dependent measure is the score that is used to compare the performance of the groups: for example, the score on a reading test administered at the end of the year, the change in performance on academic tests from the beginning of the year to the end, or the score on a final exam in the class. When researchers compare two or more groups on one or more measures, they use experimental research methodology.
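The basic analysis this implies can be sketched in a few lines of Python: compute the mean of the dependent measure for each group defined by the independent variable, then compare them. All scores below are invented for illustration.

```python
# Invented end-of-year reading test scores for two grade-1 groups.
phonics_scores = [78, 85, 82, 90, 74, 88, 81, 79]     # direct phonics instruction
read_aloud_scores = [72, 80, 77, 75, 83, 70, 76, 78]  # teacher reads aloud

def mean(scores):
    """Arithmetic mean of a list of scores."""
    return sum(scores) / len(scores)

difference = mean(phonics_scores) - mean(read_aloud_scores)
print(mean(phonics_scores))     # 82.125
print(mean(read_aloud_scores))  # 76.375
print(difference)               # 5.75
```

A real study would go on to ask whether a difference of this size could have arisen by chance (a significance test), but the core logic of the comparison is just this difference of group means.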
EXPERIMENTAL RESEARCH DEFINED
Experimental research is based on a methodology that meets three criteria: (a) random assignment — the subjects (or other entities) are randomly assigned to treatment groups; (b) experimental control — all features of the treatments are identical except for the independent variable (i.e., the feature being tested); and (c) appropriate measures — the dependent measures are appropriate for testing the research hypothesis.

For example, in the class size example, random assignment involves finding a group of students and randomly choosing some to be in small classes (i.e., consisting of 15 students) and some to be in large classes (i.e., consisting of 30 students). The researcher cannot use pre-existing small or large classes, because doing so would violate the criterion of random assignment. The problem with violating random assignment is that the groups may systematically differ; for example, students in the smaller classes may be at wealthier schools that also have more resources, better teachers, and better-prepared students. This violation of the random assignment criterion, sometimes called self-selection, is a serious methodological flaw in experimental research.

In the class size example, the criterion of experimental control is reflected in having the classes equivalent on all relevant features except class size. That is, large and small classes should have teachers who are equivalent in teaching skill, students who are equivalent in academic ability, and classrooms that are physically equivalent; they should also be equivalent in support services, length of school day, and percentages based on gender, English language proficiency, ethnicity, and so on. If the groups differ on an important variable other than class size, determining whether differences in test performance can be attributed to class size will be difficult.
This violation of the experimental control criterion, called confounding, is a serious methodological flaw in experimental research.

Finally, in the class size example, the dependent measure should test the research hypothesis that class size affects academic learning, so an appropriate measure would be to give an achievement test covering the curriculum at the start and end of the year. The appropriate measures criterion would be violated if the dependent measure were a survey asking students how well they enjoyed school this year or an ungraded portfolio of their artwork over the year. When a test does not measure what is intended, the test lacks validity; invalid tests represent a serious methodological flaw in experimental research.

BENEFITS AND LIMITATIONS OF EXPERIMENTAL RESEARCH
Experimental research is generally recognized as the most appropriate method for drawing causal conclusions about instructional interventions, for example, which instructional method is most effective for which type of student under which conditions. In a careful analysis of educational research methods, Richard Shavelson and Lisa Towne concluded that “from a scientific perspective, randomized trials (we also use the term experiment to refer to causal studies that feature random assignment) are the ideal for establishing whether one or more factors caused change in an outcome because of their strong ability to enable fair comparisons”. Similarly, Richard Mayer notes: “experimental methods — which involve random assignment to treatments and control of extraneous variables — have been the gold standard for educational psychology since the field evolved in the early 1900s”. Mayer states, “when properly implemented, they allow for drawing causal conclusions, such as the conclusion that a particular instructional method causes better learning outcomes” (p. 75).
Overall, if one wants to determine whether a particular instructional intervention causes an improvement in student learning, then one should use experimental research methodology.

Although experiments are widely recognized as the method of choice for determining the effects of an instructional intervention, they are subject to limitations involving method and theory. First, concerning method, the requirements for random assignment, experimental control, and appropriate measures can impose artificiality on the situation. Perfectly controlled conditions are generally not possible in authentic educational environments such as schools. Thus, there may be a tradeoff between experimental rigor and practical authenticity, in which highly controlled experiments may be too far removed from real classroom contexts. Experimental researchers should be sensitive to this limitation by incorporating mitigating features in their experiments that maintain ecological validity.

Second, concerning theory, experimental research may be able to tell that one method of instruction is better than conventional practice, but it may not be able to specify why; it may not be able to pinpoint the mechanisms that create the improvement. In these cases, it is useful to derive clear predictions from competing theories so that experimental research can be used to test the specific predictions of each. In addition, more focused research methods — such as naturalistic observation or in-depth interviews — may provide richer data that allow for the development of a detailed explanation of why an intervention might have an effect. Experimental researchers should be sensitive to this limitation by using complementary methods, in addition to experiments, that provide new kinds of evidence.

EXPERIMENTAL DESIGNS
Three common research designs used in experimental research are between-subjects, within-subjects, and factorial designs.
In between-subjects designs, subjects are assigned to one of two (or more) groups, with each group constituting a specific treatment. For example, in a between-subjects design, students may be assigned to spend two school years in a small class or a large class. In within-subjects designs, the same subject receives two (or more) treatments. For example, students may be assigned to a small class for one year and a large class for the next year, or vice versa. Within-subjects designs are problematic when experience with one treatment may spill over and affect the subject’s experience in the following treatment, as would likely be the case with the class size example. In factorial designs, groups are based on two (or more) factors, such as one factor being large or small class size and another factor being whether the subject is a boy or girl, which yields four cells (corresponding to four groups). In a factorial design it is possible to test for main effects, such as whether class size affects learning, and interactions, such as whether class size has equivalent effects for boys and girls.

RANDOMIZED TRIALS IN EDUCATIONAL RESEARCH

Experimental research helps test, and possibly provide evidence for, a causal relationship between factors. In the early twentieth century, Ronald A. Fisher (1890–1962) of England began testing hypotheses on crops by dividing them into groups that were similar in composition and treatment in order to isolate certain effects on the crops. Soon he and others began refining the same principles for use in human research. To ensure that groups are similar when testing variables, researchers began using randomization. By randomly placing subjects into groups that, say, receive a treatment or receive a placebo, researchers help ensure that participants with the same features do not cluster into one group. The larger the study groups, the more likely randomization will produce groups approximately equal on relevant characteristics.
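The logic of random assignment described above can be sketched in a few lines of Python. This is a hypothetical illustration: the `randomly_assign` helper, the participant records, and the pre-test scores are all invented for this example, not taken from any cited study.

```python
import random
import statistics

def randomly_assign(participants, n_groups=2, seed=None):
    """Shuffle participants and deal them into n_groups of (nearly) equal size."""
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    # Deal shuffled participants round-robin into the groups.
    return [shuffled[i::n_groups] for i in range(n_groups)]

# Hypothetical participants, each with a pre-test score (a relevant characteristic).
participants = [{"id": i, "pretest": 40 + (i % 30)} for i in range(200)]

treatment, control = randomly_assign(participants, seed=1)

# With reasonably large groups, randomization tends to balance the groups
# on pre-existing characteristics such as the pre-test mean.
print(statistics.mean(p["pretest"] for p in treatment))
print(statistics.mean(p["pretest"] for p in control))
```

The key point mirrors the text: the larger the groups, the closer the two printed means tend to be, because chance clustering of similar participants into one group becomes less likely.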
Nonrandomized trials and smaller participant groups produce a greater chance of bias in group formation. In education research, these experiments also involve randomly assigning participants to an experimental group and at least one control group. The No Child Left Behind Act of 2001 (a reauthorization of the Elementary and Secondary Education Act) and the Education Sciences Reform Act (ESRA) of 2002 both established clear federal policies expressing a preference for “scientifically based research.” A federal emphasis on the use of randomized trials in educational research is reflected in the fact that 70% of the studies funded by the Institute of Education Sciences were to employ randomized designs. The federal government and other sources say that the field of education lags behind other fields in the use of randomized trials to determine the effectiveness of methods. Critics of experimental research say that the time involved in designing, conducting, and publishing the trials makes them less effective than qualitative research. Frederick Erickson and Kris Gutierrez of the University of California, Los Angeles argued that comparing educational research to the medical model failed to consider social facts, as well as possible side effects. Evidence-based research aims to bring scientific authority to all specialties of behavioral and clinical medicine. However, the effectiveness of clinical trials can be marred by bias from financial interests and other biases, as evidenced in recent medical trials. In a 2002 Hastings Center Report, physicians Jason Klein and Albert Fleischman of the Albert Einstein College of Medicine argued that financial incentives to physicians should be limited. In 2007 many drug companies and physicians came under scrutiny over financial incentives and full disclosure of clinical trial results.
Q.5 State various types of descriptive research that can be used at different stages of the planning cycle. (20)
Sometimes an individual wants to know something about a group of people. Maybe the individual is a would-be senator who wants to know whom they represent, or a surveyor who is looking to see if there is a need for a mental health program. Descriptive research is a study designed to depict the participants in an accurate way. More simply put, descriptive research is all about describing the people who take part in the study. There are three ways a researcher can go about doing a descriptive research project, and they are:
- Observational, defined as a method of viewing and recording the participants
- Case study, defined as an in-depth study of an individual or group of individuals
- Survey, defined as a brief interview or discussion with an individual about a specific topic
Let’s look at specific ways we can use each of these.
If I say, ‘chimpanzees,’ what do you think? Okay, after you think of bananas. Okay, after you remember that their babies are adorable. Yes! Jane Goodall – the researcher who spent years observing chimpanzees in the wild. Observational studies are all about watching people, and they come in two flavors. Naturalistic observation, also known as field observation, is a study where a researcher observes the subject in its natural environment. This is basically what Jane Goodall did; she observed the chimpanzees in their natural environment and drew conclusions from this. This makes the observations more true to what happens in the chaotic, natural world. But it also means you have less control over what happens. The other flavor is laboratory observation, where a researcher observes the subject in a laboratory setting. This gives the researcher a little more control over what happens, so they don’t have to fly out to some tiny little island in the middle of a war zone to observe something. However, it does ruin some of the naturalness that one might get from field observation. An example of a laboratory observation in psychology would be a study of children at a certain age, such as one examining the process by which a child learns to speak and mimic sounds.

Descriptive research does not fit neatly into the definition of either quantitative or qualitative research methodologies; instead, it can utilize elements of both, often within the same study. The term descriptive research refers to the type of research question, design, and data analysis that will be applied to a given topic. Descriptive statistics tell what is, while inferential statistics try to determine cause and effect. The type of question asked by the researcher will ultimately determine the type of approach necessary to complete an accurate assessment of the topic at hand.
Descriptive studies, primarily concerned with finding out “what is,” might be applied to investigate the following questions: Do teachers hold favorable attitudes toward using computers in schools? What kinds of activities that involve technology occur in sixth-grade classrooms, and how frequently do they occur? What have been the reactions of school administrators to technological innovations in teaching the social sciences? How have high school computing courses changed over the last 10 years? How do the new multimediated textbooks compare to the print-based textbooks? How are decisions being made about using Channel One in schools, and for those schools that choose to use it, how is Channel One being implemented? What is the best way to provide access to computer equipment in schools? How should instructional designers improve software design to make the software more appealing to students? To what degree are special-education teachers well versed concerning assistive technology? Is there a relationship between experience with multimedia computers and problem-solving skills? How successful is a certain satellite-delivered Spanish course in terms of motivational value and academic achievement? Do teachers actually implement technology in the way they perceive? How many people use the AECT gopher server, and what do they use it for?

Collections of quantitative information

Descriptive research can be either quantitative or qualitative. It can involve collections of quantitative information that can be tabulated along a continuum in numerical form, such as scores on a test or the number of times a person chooses to use a certain feature of a multimedia program, or it can describe categories of information such as gender or patterns of interaction when using technology in a group situation. Descriptive research involves gathering data that describe events, and then organizing, tabulating, depicting, and describing the data collection (Glass & Hopkins, 1984).
It often uses visual aids such as graphs and charts to aid the reader in understanding the data distribution. Because the human mind cannot extract the full import of a large mass of raw data, descriptive statistics are very important in reducing the data to manageable form. When in-depth, narrative descriptions of small numbers of cases are involved, the research uses description as a tool to organize data into patterns that emerge during analysis. Those patterns aid the mind in comprehending a qualitative study and its implications. Most quantitative research falls into two areas: studies that describe events and studies aimed at discovering inferences or causal relationships. Descriptive studies are aimed at finding out “what is,” so observational and survey methods are frequently used to collect descriptive data (Borg & Gall, 1989). Studies of this type might describe the current state of multimedia usage in schools or patterns of activity resulting from group work at the computer. An example of this is Cochenour, Hakes, and Neal’s (1994) study of trends in compressed video applications with education and the private sector. Descriptive studies report summary data such as measures of central tendency including the mean, median, mode, deviation from the mean, variation, percentage, and correlation between variables. Survey research commonly includes that type of measurement, but often goes beyond the descriptive statistics in order to draw inferences. See, for example, Signer’s (1991) survey of computer-assisted instruction and at-risk students, or Nolan, McKinnon, and Soler’s (1992) research on achieving equitable access to school computers. Thick, rich descriptions of phenomena can also emerge from qualitative studies, case studies, observational studies, interviews, and portfolio assessments.
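The summary measures named above (mean, median, mode, deviation from the mean) can be computed directly with Python's standard library. The test scores below are made up purely for illustration:

```python
import statistics

# Hypothetical achievement-test scores for a single class.
scores = [72, 85, 78, 90, 66, 85, 74, 88, 81, 85]

mean = statistics.mean(scores)      # central tendency: arithmetic average
median = statistics.median(scores)  # central tendency: middle value
mode = statistics.mode(scores)      # central tendency: most frequent value
stdev = statistics.stdev(scores)    # variation: sample standard deviation

print(f"mean={mean} median={median} mode={mode} stdev={stdev:.2f}")
```

Reporting several of these measures together, as descriptive studies typically do, gives a more complete picture of the distribution than any single number would.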
Robinson’s (1994) case study of a televised news program in classrooms and Lee’s (1994) case study about identifying values concerning school restructuring are excellent examples of case studies. Descriptive research is unique in the number of variables employed. Like other types of research, descriptive research can include multiple variables for analysis, yet unlike other methods, it requires only one variable (Borg & Gall, 1989). For example, a descriptive study might employ methods of analyzing correlations between multiple variables by using tests such as Pearson’s Product Moment correlation, regression, or multiple regression analysis. Good examples of this are the Knupfer and Hayes (1994) study about the effects of the Channel One broadcast on knowledge of current events, Manaev’s (1991) study about mass media effectiveness, McKenna’s (1993) study of the relationship between attributes of a radio program and its appeal to listeners, Orey and Nelson’s (1994) examination of learner interactions with hypermedia environments, and Shapiro’s (1991) study of memory and decision processes. Descriptive statistics utilize data collection and analysis techniques that yield reports concerning the measures of central tendency, variation, and correlation. The combination of its characteristic summary and correlational statistics, along with its focus on specific types of research questions, methods, and outcomes, is what distinguishes descriptive research from other research types.

Main purposes of research

Three main purposes of research are to describe, explain, and validate findings. Description emerges following creative exploration, and serves to organize the findings in order to fit them with explanations, and then test or validate those explanations (Krathwohl, 1993). Many research studies call for the description of natural or man-made phenomena such as their form, structure, activity, change over time, relation to other phenomena, and so on.
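A minimal sketch of Pearson's product-moment correlation, the test mentioned above, is shown below. The data are invented for illustration (hypothetical hours of multimedia use paired with hypothetical problem-solving scores); they do not come from any of the cited studies.

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Covariance numerator and the two standard-deviation denominators.
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: weekly hours of multimedia use vs. problem-solving score.
hours = [1, 2, 3, 4, 5, 6]
score = [52, 55, 61, 60, 68, 71]

r = pearson_r(hours, score)
print(round(r, 2))
```

A value of r near +1 or −1 indicates a strong linear relationship, and a value near 0 indicates little linear relationship; as the surrounding text cautions, correlation alone cannot establish cause and effect.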
The description often illuminates knowledge that we might not otherwise notice or even encounter. Several important scientific discoveries, as well as anthropological information about events outside of our common experiences, have resulted from making such descriptions. For example, astronomers use their telescopes to develop descriptions of different parts of the universe, anthropologists describe life events of socially atypical situations or cultures uniquely different from our own, and educational researchers describe activities within classrooms concerning the implementation of technology. This process sometimes results in the discovery of stars and stellar events, new knowledge about value systems or practices of other cultures, or even the reality of classroom life as new technologies are implemented within schools. Educational researchers might use observational, survey, and interview techniques to collect data about group dynamics during computer-based activities. These data could then be used to recommend specific strategies for implementing computers or improving teaching strategies. Two excellent studies concerning the role of collaborative groups were conducted by Webb (1982) and Rysavy and Sales (1991). Noreen Webb’s landmark study used descriptive research techniques to investigate collaborative groups as they worked within classrooms. Rysavy and Sales also applied a descriptive approach to study the role of group collaboration for working at computers. The Rysavy and Sales approach did not observe students in classrooms, but reported certain common findings that emerged through a literature search. Descriptive studies have an important role in educational research. They have greatly increased our knowledge about what happens in schools.
Some of the important books in education have reported studies of this type: Life in Classrooms, by Philip Jackson; The Good High School, by Sara Lawrence Lightfoot; Teachers and Machines: The Classroom Use of Technology Since 1920, by Larry Cuban; A Place Called School, by John Goodlad; Visual Literacy: A Spectrum of Learning, by D. M. Moore and Dwyer; Computers in Education: Social, Political, and Historical Perspectives, by Muffoletto and Knupfer; and Contemporary Issues in American Distance Education, by M. G. Moore.

The Nature of Descriptive Research

The descriptive function of research is heavily dependent on instrumentation for measurement and observation (Borg & Gall, 1989). Researchers may work for many years to perfect such instrumentation so that the resulting measurement will be accurate, reliable, and generalizable. Instruments such as the electron microscope, standardized tests for various purposes, the United States census, Michael Simonson’s questionnaires about computer usage, and scores of thoroughly validated questionnaires are examples of some instruments that yield valuable descriptive data. Once the instruments are developed, they can be used to describe phenomena of interest to the researchers. The intent of some descriptive research is to produce statistical information about aspects of education that interest policy makers and educators. The National Center for Education Statistics specializes in this kind of research. Many of its findings are published in an annual volume called the Digest of Education Statistics. The center also administers the National Assessment of Educational Progress (NAEP), which collects descriptive information about how well the nation’s youth are doing in various subject areas. A typical NAEP publication is The Reading Report Card, which provides descriptive information about the reading achievement of junior high and high school students during the past two decades.
Evaluation of Educational Achievement

On a larger scale, the International Association for the Evaluation of Educational Achievement (IEA) has done major descriptive studies comparing the academic achievement levels of students in many different nations, including the United States (Borg & Gall, 1989). Within the United States, huge amounts of information are being gathered continuously by the Office of Technology Assessment, which influences policy concerning technology in education. As a way of offering guidance about the potential of technologies for distance education, that office has published a book called Linking for Learning: A New Course for Education, which offers descriptions of distance education and its potential. There has been an ongoing debate among researchers about the value of quantitative versus qualitative research, and certain remarks have targeted descriptive research as being less pure than traditional experimental, quantitative designs. Rumors abound that young researchers must conduct quantitative research in order to get published in Educational Technology Research and Development and other prestigious journals in the field. One camp argues the benefits of a scientific approach to educational research, thus preferring the experimental, quantitative approach, while the other camp posits the need to recognize the unique human side of educational research questions and thus prefers to use qualitative research methodology. Because descriptive research spans both quantitative and qualitative methodologies, it brings the ability to describe events in greater or less depth as needed, to focus on various elements of different research techniques, and to engage quantitative statistics to organize information in meaningful ways. The citations within this chapter provide ample evidence that descriptive research can indeed be published in prestigious journals.
Natural or man-made educational phenomena

Descriptive studies can yield rich data that lead to important recommendations. For example, Galloway (1992) bases recommendations for teaching with computer analogies on descriptive data, and Wehrs (1992) draws reasonable conclusions about using expert systems to support academic advising. On the other hand, descriptive research can be misused by those who do not understand its purpose and limitations. For example, one cannot draw conclusions that show cause and effect, because that is beyond the bounds of the statistics employed.

Description, prediction, improvement, and explanation

Borg and Gall (1989) classify the outcomes of educational research into the four categories of description, prediction, improvement, and explanation. They say that descriptive research describes natural or man-made educational phenomena that are of interest to policy makers and educators. Predictions of educational phenomena seek to determine whether certain students are at risk and whether teachers should use different techniques to instruct them. Research about improvement asks whether a certain technique does something to help students learn better and whether certain interventions can improve student learning by applying causal-comparative, correlational, and experimental methods. The final category of explanation posits that research is able to explain a set of phenomena, leading to our ability to describe, predict, and control those phenomena with a high level of certainty and accuracy. This usually takes the form of theories. The methods of collecting data for descriptive research can be employed singly or in various combinations, depending on the research questions at hand. Descriptive research often calls upon quasi-experimental research design (Campbell & Stanley, 1963). Some of the common data collection methods applied to questions within the realm of descriptive research include surveys, interviews, observations, and portfolios.