Normality of the data should be assessed because the choice between parametric and non-parametric statistical methods depends on whether the data follow a normal distribution. Non-parametric methods are used for data that are not normally distributed, but they are less powerful than their parametric counterparts. For this reason, many studies apply parametric methods even when the data are not normally distributed, proceeding under the assumption of normality. Although normality tests such as Shapiro-Wilk (SW) [1] and Kolmogorov-Smirnov (KS) [2] perform well on small data sets, they fail as the volume of the data increases, because the measures of central tendency and dispersion of the distribution are affected by outliers and extreme values. In this study, we propose an approach that sheds light on these situations and offers a solution. Reference points are computed from the arithmetic mean (a measure of central tendency) and the standard deviation (a measure of dispersion) of the data, and the empirical distribution of the data is plotted together with the corresponding normal distribution curve. A separation value, quantifying how far the data depart from normality, is then obtained by dividing the area between the two curves by the area under the normal distribution curve. This approach attempts to reveal whether the data are normally distributed without requiring any transformation, addressing the problem that arises in big data.
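The section does not specify how the empirical density or the areas are computed, but the described ratio can be sketched as follows. This is a minimal illustration, not the authors' implementation: the histogram-based density estimate, the function name `separation_value`, and the bin count are all assumptions made for the example.

```python
import numpy as np
from scipy.stats import norm

def separation_value(data, bins=100):
    """Ratio of the area between the empirical density and the fitted
    normal density to the area under the normal density over the same
    range. Values near 0 suggest the data are close to normal.
    (Hypothetical sketch of the described ratio, not the paper's code.)"""
    mu, sigma = np.mean(data), np.std(data, ddof=1)
    # Empirical density estimated from a histogram (one possible choice).
    density, edges = np.histogram(data, bins=bins, density=True)
    centers = (edges[:-1] + edges[1:]) / 2
    width = edges[1] - edges[0]
    # Normal density with the sample mean and standard deviation.
    normal_pdf = norm.pdf(centers, loc=mu, scale=sigma)
    # Area enclosed between the two curves (rectangle-rule integration).
    area_between = np.sum(np.abs(density - normal_pdf)) * width
    # Area under the normal curve over the observed range (at most 1).
    area_normal = np.sum(normal_pdf) * width
    return area_between / area_normal

# Usage: near 0 for normal data, clearly larger for skewed data.
rng = np.random.default_rng(0)
print(separation_value(rng.normal(size=100_000)))       # small ratio
print(separation_value(rng.exponential(size=100_000)))  # large ratio
```

Because the ratio is driven by the overall shape of the two curves rather than by a test statistic's sampling distribution, it does not automatically reject normality at large sample sizes the way SW and KS tests tend to.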