Calculating a Valid Summary of Data in Quantitative Research
1. The current calculation summarizes the Likert data with a simple average. Given that 17 percent of the respondents answered "not sure", the resulting mean of 3.19 is hard to justify, because it bends the interpretation toward the "strong agreement" bracket for the statement. The opinion of that 17 percent of respondents should not be dismissed; it must be treated as a very strong variable in this case.
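As a minimal sketch of that averaging step, the snippet below uses hypothetical response counts on a 5-point coding (1 = strongly disagree, 5 = strongly agree). The counts are invented so that they reproduce a mean of 3.19 and a 17 percent "not sure" share; they are not the survey's actual distribution.

```python
# Minimal sketch: averaging hypothetical Likert responses. The counts are
# invented to reproduce a mean of 3.19 and a 17% "not sure" share; they are
# NOT the survey's actual data.
responses = {1: 12, 2: 20, 3: 17, 4: 39, 5: 12}   # code -> number of respondents

total = sum(responses.values())                                  # 100
mean = sum(code * n for code, n in responses.items()) / total    # 3.19
share_unsure = responses[3] / total                              # 0.17

print(f"mean = {mean:.2f}, 'not sure' share = {share_unsure:.0%}")
```

The single figure of 3.19 absorbs the neutral block entirely, which is exactly why it needs a second check.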
The IQR Technique as a Tool to Check the Impact of the Variable
To gauge the influence of this unwarranted variable, we can compute the interquartile range (IQR), which can act as a confirmatory check on the validity of the 3.19 figure. The IQR is the difference between the 75th and the 25th percentiles, that is, between the upper and the lower quartiles (Kumar, 2015). If the IQR is one or two, the responses are tightly clustered and the 3.19 figure is closer to a genuinely strong opinion. If the IQR is three or four, the responses are widely spread and the current finding is not giving us the right picture.
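A minimal sketch of that check, reusing the same hypothetical response counts as above; NumPy's percentile function stands in for whatever tool the polling company actually uses.

```python
import numpy as np

# Hypothetical Likert responses (same invented counts as above), expanded
# into one value per respondent so percentiles can be taken directly.
counts = {1: 12, 2: 20, 3: 17, 4: 39, 5: 12}
values = np.repeat(list(counts.keys()), list(counts.values()))

q1, q3 = np.percentile(values, [25, 75])
iqr = q3 - q1
print(f"Q1 = {q1}, Q3 = {q3}, IQR = {iqr}")
# A small IQR (1-2) suggests tightly clustered opinions; a large IQR (3-4)
# suggests the mean of 3.19 hides a widely spread set of responses.
```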
In the case described above, the polling company is adopting a quantitative approach to research. More precisely, it can be termed stratified sampling, where geographical region and time bands function as the agents that minimize the variables. Here, the street corner is the spatial (geographical) stratum, and the 9-12 time band is another control that keeps a check on excessive randomization of the survey result. Many experts would also call this a stratified cluster design; however, cluster theory does not apply here because we do not have a dedicated list, or anything close to one, from which to fix a sampling frame (Thompson, 2012).
The Pros of Stratified Sampling in the Current Case
- A stratified sample based on geographical area presents a clear subdivision of the city's population. Random sampling within each stratum further reduces the bias of the result by ensuring maximum representation of all possible subgroups based on gender, caste, and creed.
- This type of data collection also allows the mean of each subgroup to be calculated separately. These individual values can play an important role when we deal with one-sided results. For instance, the mean of the overall picture may indicate a victory for a particular party, while the subgroup means for areas where the winning party is not performing well can indicate the likely margin of victory when the data are treated separately (see the sketch after this list).
- Disproportionate allocation of the sampling frame will produce a low standard deviation when we calculate the mean over a large amount of data (Levy and Lemeshow, 2013). This advantage helps bring down the bias ratio. In common parlance, disproportionate allocation ensures the representation of the maximum number of subgroups by default, because we are not setting any boundaries.
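A minimal sketch of the subgroup treatment mentioned above, assuming hypothetical vote-intention data keyed by street-corner stratum; the stratum names and values are invented for illustration only.

```python
from statistics import mean

# Hypothetical stratified responses: 1 = intends to vote for the leading party,
# 0 = does not. Stratum names and values are invented for illustration.
strata = {
    "corner_north": [1, 1, 0, 1, 1, 0, 1, 1],
    "corner_south": [0, 1, 0, 0, 1, 0, 0, 1],
    "corner_east":  [1, 1, 1, 0, 1, 1, 1, 0],
}

# Mean of each subgroup treated separately ...
for name, votes in strata.items():
    print(f"{name}: support = {mean(votes):.2f}")

# ... versus the single pooled mean that hides the weaker strata.
pooled = [v for votes in strata.values() for v in votes]
print(f"pooled: support = {mean(pooled):.2f}")
```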
The Cons of Stratified Sampling in the Current Case
- If the prima facie findings from processing the research data show mixed results, the analysis can give rise to Simpson's paradox, where the impact of the variance increases and produces two contrasting pictures (S, 2015). There are two ways to process the data: first, we can aggregate the results obtained from each subgroup, which gives us a result connected to the predictions; second, we can pool the data of all the subgroups into one wider picture and apply the same set of calculations that we applied to the subgroups. In the second case the findings may sometimes vary and show exactly the opposite result (Gupta, 2012), as in the sketch after this list.
- The current system offers no means of keeping a check on overlapping feedback (the same respondent being counted under more than one stratum or time band). Despite access to a large amount of data, it can be difficult to find comparable fields that help us corroborate our findings.
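A minimal sketch of the paradox described above, using textbook-style invented counts rather than the survey's data: each subgroup favors option A, yet the pooled totals favor option B.

```python
# Minimal sketch of Simpson's paradox with textbook-style invented counts.
# Within every subgroup option A scores higher, yet the pooled totals favor B.
data = {
    "subgroup_1": {"A": (81, 87),   "B": (234, 270)},   # (successes, total)
    "subgroup_2": {"A": (192, 263), "B": (55, 80)},
}

totals = {"A": [0, 0], "B": [0, 0]}
for name, groups in data.items():
    for option, (hits, n) in groups.items():
        totals[option][0] += hits
        totals[option][1] += n
        print(f"{name} {option}: {hits}/{n} = {hits / n:.2%}")

for option, (hits, n) in totals.items():
    print(f"pooled {option}: {hits}/{n} = {hits / n:.2%}")
# Subgroup rates favor A (93% vs 87%, 73% vs 69%); pooled rates favor B (78% vs 83%).
```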
Conclusion
The current data collection model lacks any means of preventing this overlap. It is also a rule of thumb that stratified data often need a secondary layer of research, such as action research, to arrive at exact findings. In the absence of any check on overlapping data, the results may vary and paint a wrong picture of the likely outcome.
The Limitations of Using Quantitative Research for Data Collection
3.
a. The number of cars passing through a signal in a given hour (whole numbers) is interval data. Interval data can be compared numerically. In this case we can compare the data internally as well as externally: internally, we can find out which five minutes of the hour were the busiest and differentiate between the various time intervals; externally, if we have two samples, we can compare them as well. The intervals are equal, meaning that the change from 1 to 2 is the same size as the change from 7 to 8.
b. A Kelvin thermometer produces ratio data. We can state this because of the presence of an absolute zero on the scale. When dealing with the Kelvin scale we can also use the readings for descriptive and inferential statistics. Any ratio scale allows us to add, subtract, multiply, and divide values, and this property is also present on the Kelvin scale.
c. A Fahrenheit thermometer produces interval data: we can work directly with the differences between values on the scale, but its zero point is arbitrary, so ratios of readings are not meaningful (see the sketch after this list).
d. The type of mobile phone a person owns is nominal data; each phone type represents a different label.
e. A person's height is nominal data when we are recording a single subject; it is just another entry, like eye colour or hair colour. However, when we collect a database of people's heights, it becomes interval data because we can compare the values (Black, 2009).
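A minimal sketch of the ratio-versus-interval distinction from (b) and (c), using made-up temperature readings; the Kelvin-to-Fahrenheit conversion formula is standard, everything else is illustrative.

```python
# Ratio vs interval scales with made-up temperature readings.
# On the Kelvin (ratio) scale, 300 K really is twice 150 K because the scale
# has an absolute zero; the same readings expressed in Fahrenheit (interval
# scale, arbitrary zero) give a ratio with no physical meaning.
def kelvin_to_fahrenheit(k: float) -> float:
    return (k - 273.15) * 9 / 5 + 32

t1_k, t2_k = 150.0, 300.0
t1_f, t2_f = kelvin_to_fahrenheit(t1_k), kelvin_to_fahrenheit(t2_k)

print(f"Kelvin ratio:     {t2_k / t1_k:.2f}")   # 2.00 -- meaningful
print(f"Fahrenheit ratio: {t2_f / t1_f:.2f}")   # not 2, and not meaningful
```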
For a non-experimental study, I would first conduct an exam on a lesson the students learned through a recording and later compare its results with a lesson where they attended a lecture delivered in person. Non-experimental study requires that the subjects be studied in their natural environment; the number of uncontrolled variables may be high, but descriptive findings can still be accumulated (Turner, 2014).
Under an experimental study, I would divide the class into two random groups: one group would learn the lesson through a video class while the other attended the regular class, and a comparison of the results would indicate which approach is better. Experimental research is research in which we manipulate one variable while keeping the rest constant. Here we need to design groups and treat every group as evidence for the effect we are trying to corroborate or discover (Mertler, 2018).
Quasi-experimental research is the next level of experimental research. In this type of research, we need to check the baseline properties of the groups we have made. In the current case, we can use variables such as high scorers and low scorers while forming the groups; after the groups are formed, we can subject them to the same chapter under similar conditions (Suter, 2011). The results obtained by this method will be more accurate because the baselines are the same and extraneous variables play a minimal role in the process. A sketch of the two group-formation steps follows below.
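A minimal sketch contrasting purely random assignment with baseline-balanced assignment, assuming a hypothetical class roster with prior scores; the student labels, scores, and the balancing-by-alternation scheme are illustrative only.

```python
import random
from statistics import mean

# Hypothetical roster: (student, prior score). Labels and scores are invented.
roster = [("s1", 55), ("s2", 80), ("s3", 62), ("s4", 91),
          ("s5", 47), ("s6", 73), ("s7", 66), ("s8", 85)]

# Experimental design: purely random assignment to video vs. regular lecture.
random.shuffle(roster)
video_group, lecture_group = roster[: len(roster) // 2], roster[len(roster) // 2:]

# Quasi-experimental-style balancing: sort on prior score and alternate
# assignment, so high and low scorers appear in both groups (similar baselines).
ranked = sorted(roster, key=lambda s: s[1], reverse=True)
video_q, lecture_q = ranked[0::2], ranked[1::2]

for label, group in [("random video", video_group), ("random lecture", lecture_group),
                     ("balanced video", video_q), ("balanced lecture", lecture_q)]:
    print(f"{label}: mean prior score = {mean(score for _, score in group):.1f}")
```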
Black, K. (2009). Business Statistics: Contemporary Decision Making. New Jersey: Wiley.
Gupta, A. K. (2012). Theory of Sample Surveys. Singapore: World Scientific.
Kumar, D. C. (2015). Construction of Interquartile range (IQR) control chart using process. American International Journal of Research in Science, Technology, Engineering & Mathematics. https://pdfs.semanticscholar.org/f326/6af8020111de4d9ffe1bb6cbe509c7fe0fde.pdf
Levy, P. S. and Lemeshow, S. (2013). Sampling of Populations: Methods and Applications (4th ed.). New Jersey: Wiley.
Mertler, C. A. (2018). Introduction to Educational Research. California: Sage Publications.
S, K. and Banerjee, M. (2015). Logic and Its Applications: 6th Indian Conference, ICLA 2015, Mumbai, India (pp. 58-59). Germany: Springer.
Suter, W. N. (2011). Introduction to Educational Research: A Critical Thinking Approach. California: Sage Publications.
Thompson, S. K. (2012). Sampling. New Jersey: Wiley.
Turner, J. L. (2014). Using Statistics in Small-Scale Language Education Research. London: Routledge.