On How to Compute the NOIP2024 Score Lines


Approach

First, we know the following facts:

Combining this information, we arrive at the following scheme:

So we might as well proceed like this:

Problem A: using some machine learning algorithm, based on this year's exact score lines for the provinces SD, ZJ, and JX (all three separate junior-high and senior-high students) and on the relative score relationships between the remaining provinces and SD, ZJ, JX in NOIP2023, compute each province's first-, second-, and third-prize scores \alpha_i',\beta_i',\theta_i'

Problem B: compute each province's first-, second-, and third-prize scores \alpha_i'',\beta_i'',\theta_i'' directly from the 云斗榜 leaderboard, by rank proportion

Finally, taking \alpha_i = \dfrac{\alpha_i' + \alpha_i''}{2},\ \beta_i = \dfrac{\beta_i' + \beta_i''}{2},\ \theta_i = \dfrac{\theta_i' + \theta_i''}{2} gives a reasonably accurate estimate.
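The combination step is just an element-wise average of the two estimates; a minimal sketch with hypothetical numbers (the arrays below are made up for illustration, not real score lines):

```python
import numpy as np

# Hypothetical first-prize estimates (illustration only, not real score lines)
alpha_ml = np.array([178.0, 220.0, 155.0])    # alpha_i'  from Problem A (machine learning)
alpha_rank = np.array([182.0, 214.0, 161.0])  # alpha_i'' from Problem B (rank proportion)

# Element-wise average gives the combined estimate alpha_i
alpha = (alpha_ml + alpha_rank) / 2
print(alpha)  # [180. 217. 158.]
```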

Implementation

Problem A

First, find the NOIP first-, second-, and third-prize score lines for each province on the official NOI website.

Then, abstract the problem as follows:

Suppose there are n points, and at each time step every point i has three feature values a_i,b_i,c_i. We know the feature values of all points at time t-1, namely a_{i}(t-1),b_{i}(t-1),c_{i}(t-1), as well as the feature values of points 1, 2, 3 at time t, namely a_{1\sim 3}(t),b_{1\sim 3}(t),c_{1\sim 3}(t). We also know the following:

  • The scores of the same point at different times may follow some relative relationship.
  • The scores of different points at the same time may follow some relative relationship.

Using the information above, predict all features of the remaining points at time t.
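The setup above can be pictured as two tables, a complete one at time t-1 and a partial one at time t; a minimal sketch (the point names and numbers below are placeholders, not real data):

```python
import numpy as np
import pandas as pd

points = ['ZJ', 'SD', 'JX', 'AH', 'BJ']  # n = 5 points; the first three are known at time t

# Time t-1: every point's features are known (a missing cell is allowed)
feat_t_minus_1 = pd.DataFrame(
    {'a': [230, 160, 185, 120, 200],
     'b': [130,  95, 105,  70, 110],
     'c': [ 65,  45,  42, np.nan, 60]},
    index=points)

# Time t: only points 1..3 are known; the task is to predict the rest
feat_t = pd.DataFrame(
    {'a': [220.0, 155.0, 178.0, np.nan, np.nan],
     'b': [120.0,  88.0, 100.0, np.nan, np.nan],
     'c': [ 59.0,  40.0,  40.0, np.nan, np.nan]},
    index=points)

print(int(feat_t.isna().sum().sum()))  # number of cells left to predict
```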

It is not hard to see that this is essentially a spatio-temporal graph machine learning problem, i.e. a combination of time-series forecasting and spatial graph analysis. We can therefore take the following approach:

In short, the machine learning model for this part is not complicated. Here is the code for this part:

import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.impute import SimpleImputer

df = pd.read_csv('a.csv', header=None, skip_blank_lines=True)
df.columns = ['PointID', 'Feature_a', 'Feature_b', 'Feature_c']

# Rows 0-22 hold the t-1 data; rows 25-27 hold the three known points at time t
df_t_minus_1 = df.iloc[0:23].copy()
df_t = df.iloc[25:28].copy()

# Replace 'N/A' with np.nan
df_t_minus_1 = df_t_minus_1.replace('N/A', np.nan)
df_t = df_t.replace('N/A', np.nan)

# Save a mask recording which cells were N/A
na_mask = df_t_minus_1[['Feature_a', 'Feature_b', 'Feature_c']].isna()

for col in ['Feature_a', 'Feature_b', 'Feature_c']:
    df_t_minus_1[col] = pd.to_numeric(df_t_minus_1[col], errors='coerce')
    df_t[col] = pd.to_numeric(df_t[col], errors='coerce')

# The 3 known points at time t are the first 3 rows of df_t
known_points = df_t['PointID'].iloc[:3].values

# Align the point order of the t-1 data with known_points
df_t_minus_1.set_index('PointID', inplace=True)
df_t.set_index('PointID', inplace=True)

# Extract the feature matrix
features_t_minus_1 = df_t_minus_1[['Feature_a', 'Feature_b', 'Feature_c']].values

# Features of the known points at time t (train_y)
train_y = df_t.iloc[:3][['Feature_a', 'Feature_b', 'Feature_c']].values

# ---------- Build difference and rank features ----------
ref_point_1 = features_t_minus_1[0]
ref_point_2 = features_t_minus_1[1]
ref_point_3 = features_t_minus_1[2]

diff_features_1 = features_t_minus_1 - ref_point_1
diff_features_2 = features_t_minus_1 - ref_point_2
diff_features_3 = features_t_minus_1 - ref_point_3

n = features_t_minus_1.shape[0]
ranks_a = np.argsort(np.argsort(features_t_minus_1[:, 0])) / (n - 1 if n > 1 else 1)
ranks_b = np.argsort(np.argsort(features_t_minus_1[:, 1])) / (n - 1 if n > 1 else 1)
ranks_c = np.argsort(np.argsort(features_t_minus_1[:, 2])) / (n - 1 if n > 1 else 1)
ranks_features = np.column_stack([ranks_a, ranks_b, ranks_c])

# Build the final feature matrix X_t_minus_1 (15-dimensional)
X_t_minus_1 = np.hstack([
    features_t_minus_1,
    diff_features_1,
    diff_features_2,
    diff_features_3,
    ranks_features
])

# Impute missing values with the column mean
imputer = SimpleImputer(strategy='mean')
X_t_minus_1_imputed = imputer.fit_transform(X_t_minus_1)

# Save the PointID order
point_ids = df_t_minus_1.index.tolist()

# Train the model on the three known points
train_X = X_t_minus_1_imputed[:3, :]
model = LinearRegression()
model.fit(train_X, train_y)

# Predict time-t features for all points
predicted_features_t = model.predict(X_t_minus_1_imputed)

# For the three known points, replace the predictions with the known values
predicted_features_t[:3] = train_y

# Put the predictions back into a DataFrame
df_predict_t = pd.DataFrame(predicted_features_t, columns=['Feature_a', 'Feature_b', 'Feature_c'])
df_predict_t.insert(0, 'PointID', point_ids)  # insert the PointID column

# Restore the cells that were originally N/A
for col in ['Feature_a', 'Feature_b', 'Feature_c']:
    df_predict_t.loc[na_mask[col], col] = np.nan  # restore NaN

print("Predicted features at time t with PointID and restored N/A values:")
print(df_predict_t)
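As an aside, the double `argsort` above is the standard trick for turning raw values into ranks, and dividing by n-1 normalizes the ranks into [0, 1]; a minimal illustration:

```python
import numpy as np

x = np.array([30.0, 10.0, 20.0])
ranks = np.argsort(np.argsort(x))  # [2 0 1]: 30 is the largest value, 10 the smallest
normalized = ranks / (len(x) - 1)  # [1.0 0.0 0.5]
print(normalized)
```

One caveat: `np.argsort` sorts NaN to the end, so a province with a missing feature silently receives the highest rank before imputation.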

Running it, we obtain the following predictions:

   PointID   Feature_a   Feature_b  Feature_c
0       JX  178.000000  100.000000  40.000000
1       ZJ  220.000000  120.000000  59.000000
2       SD  155.000000   88.000000  40.000000
3       AH  110.847451   64.437825        NaN
4       BJ  190.309958  104.300134  61.079819
5       FJ  186.755121  102.367599  61.852881
6       GS   75.436955   47.138863        NaN
7       GD  212.775890  116.513375  56.194495
8       GX   98.973446   58.360403        NaN
9       GZ   75.458795   46.835995        NaN
10      HI   79.365810   48.645238        NaN
11      HL   81.944253   49.417502        NaN
12      HA   88.677821   52.763350        NaN
13      HB  167.215532   92.941218  54.223389
14      HN  213.331514  115.493539  69.202466
15      JS  159.406763   89.451465  48.419357
16      LN  143.609163   81.996402  40.601396
17      NM   71.243282   44.544305        NaN
18      CQ  263.184353  140.643888  77.742119
19      SX   75.150297   46.353549        NaN
20      SH  191.575717  106.184227  48.926281
21      SC  188.748750  103.388452  62.044580
22      TJ   92.026375   54.898477        NaN

Problem B

This part computes each province's first-, second-, and third-prize scores by rank proportion, directly from the 云斗榜 leaderboard.
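The rank-proportion idea can be sketched as follows, assuming the leaderboard gives a province's score list and the award quota is a fixed fraction of participants. The function `cutoff_score`, the scores, and the ratios below are all hypothetical illustrations, not the actual 云斗榜 data or CCF ratios:

```python
def cutoff_score(scores, ratio):
    """Score of the last awarded contestant when the top `ratio` fraction wins (hypothetical helper)."""
    ordered = sorted(scores, reverse=True)
    k = max(1, int(len(ordered) * ratio))  # number of awardees, at least one
    return ordered[k - 1]

# Hypothetical leaderboard for one province
scores = [260, 245, 240, 230, 210, 180, 175, 160, 140, 100]
print(cutoff_score(scores, 0.15))  # top 15% of 10 -> 1 awardee -> line at 260
print(cutoff_score(scores, 0.30))  # top 30% of 10 -> 3 awardees -> line at 240
```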

Merging the Two Problems

According to the original plan, we would simply average the results of A and B. However, CCF's wording on the NOIP second and third prizes is ambiguous:

The NOI Scientific Committee sets each province's second- and third-prize score lines based on the province's score distribution, and decides each province's award ratios and whether a third prize is established at all.

Drawing lines within each province separately and drawing them nationwide will certainly give different results.

Therefore, in the end we average only the first-prize lines, and leave the second and third prizes open for discussion.

Results

A bit absurd, folks. Still, it's good fun to look at. Probably.

Full Code

# Author : Vianka Peng
# Date : 12/7/2024
# Quote : For NOIP 2024 

import numpy as np
import pandas as pd
from sklearn.metrics import mean_absolute_percentage_error
from sklearn.linear_model import LinearRegression
from sklearn.impute import SimpleImputer

df = pd.read_csv('a.csv', header=None, skip_blank_lines=True)
df.columns = ['PointID', 'Feature_a', 'Feature_b', 'Feature_c']

# Rows 0-22 hold the t-1 data; rows 25-27 hold the three known points at time t
df_t_minus_1 = df.iloc[0:23].copy()
df_t = df.iloc[25:28].copy()

# Replace 'N/A' with np.nan
df_t_minus_1 = df_t_minus_1.replace('N/A', np.nan)
df_t = df_t.replace('N/A', np.nan)

# Save a mask recording which cells were N/A
na_mask = df_t_minus_1[['Feature_a', 'Feature_b', 'Feature_c']].isna()

# Convert columns to numeric
for col in ['Feature_a', 'Feature_b', 'Feature_c']:
    df_t_minus_1[col] = pd.to_numeric(df_t_minus_1[col], errors='coerce')
    df_t[col] = pd.to_numeric(df_t[col], errors='coerce')

# Assume the 3 known points at time t are the first 3 rows of df_t (adjust to the actual data if needed)
known_points = df_t['PointID'].iloc[:3].values

# Align the point order of the t-1 data with known_points
df_t_minus_1.set_index('PointID', inplace=True)
df_t.set_index('PointID', inplace=True)

# Extract the feature matrix
features_t_minus_1 = df_t_minus_1[['Feature_a', 'Feature_b', 'Feature_c']].values

# Features of the known points at time t (train_y)
train_y = df_t.iloc[:3][['Feature_a', 'Feature_b', 'Feature_c']].values

# ---------- Build difference and rank features ----------
ref_point_1 = features_t_minus_1[0]
ref_point_2 = features_t_minus_1[1]
ref_point_3 = features_t_minus_1[2]

diff_features_1 = features_t_minus_1 - ref_point_1
diff_features_2 = features_t_minus_1 - ref_point_2
diff_features_3 = features_t_minus_1 - ref_point_3

n = features_t_minus_1.shape[0]
ranks_a = np.argsort(np.argsort(features_t_minus_1[:, 0])) / (n - 1 if n > 1 else 1)
ranks_b = np.argsort(np.argsort(features_t_minus_1[:, 1])) / (n - 1 if n > 1 else 1)
ranks_c = np.argsort(np.argsort(features_t_minus_1[:, 2])) / (n - 1 if n > 1 else 1)
ranks_features = np.column_stack([ranks_a, ranks_b, ranks_c])

# Build the final feature matrix X_t_minus_1 (15-dimensional)
X_t_minus_1 = np.hstack([
    features_t_minus_1,
    diff_features_1,
    diff_features_2,
    diff_features_3,
    ranks_features
])

# Impute missing values with the column mean
imputer = SimpleImputer(strategy='mean')
X_t_minus_1_imputed = imputer.fit_transform(X_t_minus_1)

# Save the PointID order
point_ids = df_t_minus_1.index.tolist()

# Train the model on the three known points
train_X = X_t_minus_1_imputed[:3, :]
model = LinearRegression()
model.fit(train_X, train_y)

# Predict time-t features for all points
predicted_features_t = model.predict(X_t_minus_1_imputed)

# For the three known points, replace the predictions with the known values
predicted_features_t[:3] = train_y

# Put the predictions back into a DataFrame
df_predict_t = pd.DataFrame(predicted_features_t, columns=['Feature_a', 'Feature_b', 'Feature_c'])
df_predict_t.insert(0, 'PointID', point_ids)  # insert the PointID column

# Restore the cells that were originally N/A
for col in ['Feature_a', 'Feature_b', 'Feature_c']:
    df_predict_t.loc[na_mask[col], col] = np.nan  # restore NaN

# Read b.csv
df_b = pd.read_csv('b.csv', header=None)
df_b.columns = ['PointID', 'Feature_a_b', 'Feature_b_b', 'Feature_c_b']

# Make sure b.csv's rows match the predictions point by point:
# align df_b to the prediction order BEFORE computing the error
df_b.set_index('PointID', inplace=True)
df_predict_t.set_index('PointID', inplace=True)
df_b = df_b.loc[df_predict_t.index]

predicted_a = df_predict_t['Feature_a'].astype(float)
actual_a = df_b['Feature_a_b'].astype(float)

print(predicted_a)
print(actual_a)

mape = mean_absolute_percentage_error(actual_a, predicted_a)

print(f"Mean Absolute Percentage Error (MAPE): {mape}")

# Average Feature_a from the two methods and round
df_predict_t['Feature_a'] = np.round((df_predict_t['Feature_a'] + df_b['Feature_a_b']) / 2, 0)
df_predict_t['Feature_b'] = np.round(df_predict_t['Feature_b'], 0)
df_predict_t['Feature_c'] = np.round(df_predict_t['Feature_c'], 0)

df_predict_t = df_predict_t.replace(np.nan, '不设立')
df_b = df_b.replace(np.nan, '不设立')
# Record Feature_b and Feature_c as 'x / y' (ML estimate / rank estimate)
df_predict_t['Feature_b'] = df_predict_t['Feature_b'].astype(str) + ' / ' + df_b['Feature_b_b'].astype(str)
df_predict_t['Feature_c'] = df_predict_t['Feature_c'].astype(str) + ' / ' + df_b['Feature_c_b'].astype(str)

df_predict_t.reset_index(inplace=True)

print("Final DataFrame with PointID, updated Feature_a, and combined Feature_b and Feature_c:")
print(df_predict_t)
df_predict_t.to_csv('c.csv', index=False)

Bonus

If we regard the machine-learning output as the predicted values and the rank-based results as the true values, why not compare the MAPE (Mean Absolute Percentage Error) between Problem A and Problem B?

\text{MAPE} = \frac{1}{n} \sum_{i=1}^{n} \left| \frac{y_i - \hat{y}_i}{y_i} \right| \times 100

where y_i is the true value and \hat{y}_i is the predicted value.
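The formula is easy to implement directly; note that sklearn's `mean_absolute_percentage_error` omits the ×100 and returns a fraction instead. A minimal check:

```python
import numpy as np

def mape(y_true, y_pred):
    """MAPE as a percentage, following the formula above."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.mean(np.abs((y_true - y_pred) / y_true)) * 100

y_true = [200.0, 100.0, 50.0]
y_pred = [190.0, 110.0, 40.0]
print(mape(y_true, y_pred))  # (0.05 + 0.10 + 0.20) / 3 * 100 ≈ 11.67
```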

After computing it, we are surprised to find that the MAPE reached:

Mean Absolute Percentage Error (MAPE): 0.18790530294048333

That is about 18.8% (sklearn returns the fraction rather than the percentage). Possibly yet another failure case for machine learning. Then again, the data may just be too scarce and the model too simple. Who knows? Besides, Problem B is not the "ground truth" in an absolute sense either, since its results include junior-high students.