Item Analysis and Item-Total Correlation

In the item analysis phase, we used the critical-ratio method to assess how well each questionnaire item discriminated among respondents. We ranked participants by total score and split them into an upper group (top 33%) and a lower group (bottom 33%), then ran an independent-samples t-test on each item to check for significant differences between the two groups. An item was retained if its p-value was below 0.05, indicating that it effectively differentiated respondents. All 18 items in the initial questionnaire met this criterion, so all were kept.
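The extreme-groups procedure above can be sketched in Python (a hypothetical illustration; the original analysis was run in SPSS):

```python
import numpy as np
from scipy import stats

def discrimination_test(item_scores, total_scores, frac=0.33):
    """Critical-ratio method: t-test of an item's scores in the
    top vs. bottom `frac` of respondents ranked by total score."""
    order = np.argsort(total_scores)
    k = int(len(total_scores) * frac)
    low = item_scores[order[:k]]    # bottom 33% by total score
    high = item_scores[order[-k:]]  # top 33% by total score
    t, p = stats.ttest_ind(high, low)
    return t, p

# Keep an item when p < 0.05, i.e. it discriminates between the groups.
```

Items whose high- and low-group means do not differ significantly would be dropped at this stage.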
Next, we assessed homogeneity by calculating the Pearson correlation between each item and the total score. Items with an item-total correlation above 0.4 were considered sufficiently homogeneous. All 18 items passed this test as well, meaning each was consistent with the overall score.
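The item-total correlation check can be computed the same way (an illustrative sketch, not the original SPSS output):

```python
import numpy as np

def item_total_correlations(items):
    """Pearson r between each item (column) and the total score.

    items: array of shape (n_respondents, n_items).
    """
    total = items.sum(axis=1)
    return np.array([np.corrcoef(items[:, j], total)[0, 1]
                     for j in range(items.shape[1])])

# Retain items whose correlation with the total score exceeds 0.4.
```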
Exploratory Factor Analysis (EFA)
We analyzed the 152 valid surveys collected during the 2023 Torch Festival using SPSS. The Kaiser-Meyer-Olkin (KMO) measure was 0.865, and Bartlett's test of sphericity was significant (p < 0.01), confirming the data were suitable for factor analysis.
Using principal component analysis, we applied factor-loading criteria to screen out weak items. Five items failed to meet these criteria and were removed, leaving 13 items. Together, these items explained 60.192% of the total variance, indicating that the extracted factors adequately captured the underlying structure.
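For reference, the KMO measure and Bartlett's test of sphericity can be computed directly from the correlation matrix using their standard formulas (a sketch; the paper's figures came from SPSS):

```python
import numpy as np
from scipy import stats

def bartlett_sphericity(data):
    """Bartlett's test that the correlation matrix is an identity matrix."""
    n, p = data.shape
    R = np.corrcoef(data, rowvar=False)
    chi2 = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    dof = p * (p - 1) / 2
    return chi2, stats.chi2.sf(chi2, dof)

def kmo(data):
    """Kaiser-Meyer-Olkin measure of sampling adequacy."""
    R = np.corrcoef(data, rowvar=False)
    inv = np.linalg.inv(R)
    # Anti-image (partial) correlations from the inverse correlation matrix
    A = -inv / np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    np.fill_diagonal(R, 0.0)
    np.fill_diagonal(A, 0.0)
    return (R ** 2).sum() / ((R ** 2).sum() + (A ** 2).sum())
```

A KMO above roughly 0.8 (the paper reports 0.865) and a significant Bartlett statistic justify proceeding with factor extraction.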
Confirmatory Factor Analysis (CFA)
We further verified the 13 retained items through CFA using SmartPLS software with 433 valid surveys, evaluating the measurement model against three criteria: reliability, convergent validity, and discriminant validity.
Reliability indicates whether the questionnaire measures its constructs consistently. We assessed it with Cronbach's alpha and composite reliability (CR); Cronbach's alpha values fell between 0.688 and 0.750, indicating acceptable reliability.
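Both statistics are straightforward to compute from the raw responses and the standardized loadings; a hypothetical sketch (the paper used SmartPLS):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for items of shape (n_respondents, n_items)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def composite_reliability(loadings):
    """CR from standardized factor loadings:
    (sum(l))^2 / ((sum(l))^2 + sum(1 - l^2))."""
    loadings = np.asarray(loadings, dtype=float)
    num = loadings.sum() ** 2
    return num / (num + (1 - loadings ** 2).sum())
```

For example, five loadings of 0.7 give a CR of about 0.83; values above 0.7 are conventionally read as adequate.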
For validity, we checked whether the measurements reflected the intended concepts. The average variance extracted (AVE) values were satisfactory, indicating good convergent validity, and discriminant validity was also confirmed: each factor measured a distinct dimension.
Structural Model Explanatory Power
We evaluated the overall effectiveness of our model using the coefficient of determination (R²), goodness of fit (GoF), and predictive relevance (Q²). Our model had an R² of 0.978, a Q² of 0.585, and a GoF of 0.764, indicating a strong overall fit.
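In PLS modeling, GoF is commonly defined as the geometric mean of the average communality and the average R². A minimal sketch of that relationship (the 0.60 communality value below is illustrative, not taken from the paper):

```python
import math

def gof(mean_communality, mean_r2):
    """PLS goodness of fit: sqrt(avg communality * avg R^2)."""
    return math.sqrt(mean_communality * mean_r2)

# Illustrative only: with an assumed average communality of 0.60 and the
# reported R^2 of 0.978, gof(0.60, 0.978) is on the order of the paper's 0.764.
```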
Path Coefficients and Hypothesis Testing
Using bootstrapping, we tested the path coefficients and found that all hypotheses were supported, with every coefficient reaching statistical significance as indicated by its t- and p-values.
Index and Weight Calculation
Lastly, we transformed the findings into a final integration index for sports, culture, and tourism. Following established guidelines, we organized the indicators into a multi-level structure and calculated a weight coefficient for each indicator to quantify its contribution to the overall index.
The resulting weights covered perceived event quality, perceived tourism development, and perceived cultural representation. Applying these weights, we scored the integration of sports, culture, and tourism in the Guizhou Village Super League at 82.27 points.
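The final score is a weighted aggregation of dimension scores. A minimal sketch of that step (the scores and weights below are placeholders, not the paper's values):

```python
def integration_index(scores, weights):
    """Weighted sum of dimension scores (0-100 scale); weights must sum to 1."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(s * w for s, w in zip(scores, weights))

# Placeholder example: three dimensions (e.g. perceived event quality,
# tourism development, cultural representation) with illustrative weights.
value = integration_index([80.0, 85.0, 82.0], [0.4, 0.3, 0.3])
```

In the actual study, each indicator's weight came from the measurement model rather than being assigned by hand.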
In summary, our study successfully created a robust index to measure the integration of sports, culture, and tourism, demonstrating strong reliability and validity across all assessments.