code: https://github.com/XiuzeZhou/NASA

1. Introduction

In recent years, with the growing interest in Li-ion batteries, I started my own Li-ion battery research project. For Remaining Useful Life (RUL) prediction of Li-ion batteries, as most papers do, we use two public data sets: NASA and CALCE. The NASA data set is available from the NASA Ames Research Center web site, and the CALCE data set is available from the Center for Advanced Life Cycle Engineering (CALCE) at the University of Maryland.

Next, I will write a series of blog posts applying classic neural network architectures, such as Transformer, LSTM, RNN and MLP, to predict the remaining life of Li-ion batteries. Let's start with the simplest network, the MLP.

2. NASA Dataset

I. Description

Ambient temperature: 24°C

Charging: carried out in a constant current (CC) mode at 1.5A until the battery voltage reached 4.2V and then continued in a constant voltage (CV) mode until the charge current dropped to 20mA.

Discharge: carried out at a constant current (CC) level of 2A until the battery voltage fell to 2.7V, 2.5V, 2.2V and 2.5V for batteries 5, 6, 7 and 18, respectively.

End of Life (EOL): the point at which the remaining capacity reaches 70-80% of the initial capacity, i.e., fades from 2 Ah to 1.4 Ah.

II. Data Preprocessing

First, we read the .mat files and extract the key information: capacity, current and voltage.

For more details, please see my blog: https://snailwish.com/395/.
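Here is a minimal sketch of how the capacity sequence can be pulled out of one of the .mat files with scipy. The field layout below (and the battery names) is my assumption about the standard NASA battery files, so the indexing may differ slightly from the code in the repository:

import numpy as np
from scipy.io import loadmat

def load_capacity(mat_path, battery_name):
    # e.g. load_capacity('B0005.mat', 'B0005')
    mat = loadmat(mat_path)
    cycles = mat[battery_name][0, 0]['cycle'][0]
    cycle_idx, capacities = [], []
    count = 0
    for c in cycles:
        if c['type'][0] == 'discharge':    # only discharge cycles carry a capacity value
            count += 1
            cycle_idx.append(count)
            capacities.append(c['data'][0, 0]['Capacity'][0][0])
    return [np.array(cycle_idx), np.array(capacities)]

The Battery dictionary used below can then be built as, for example, Battery = {name: load_capacity(name + '.mat', name) for name in Battary_list}, with Battary_list = ['B0005', 'B0006', 'B0007', 'B0018'].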

After loading the data, we plot how the capacity degrades with the number of charge/discharge cycles:

import matplotlib.pyplot as plt

fig, ax = plt.subplots(1, figsize=(12, 8))
color_list = ['b:', 'g--', 'r-.', 'c:']            # one line style per battery
for name, color in zip(Battary_list, color_list):
    df_result = Battery[name]                      # [cycle numbers, capacities]
    ax.plot(df_result[0], df_result[1], color, label=name)

ax.set(xlabel='Discharge cycles', ylabel='Capacity (Ah)',
       title='Capacity degradation at ambient temperature of 24°C')
plt.legend()
plt.show()

3. Training and Test Data

I. Generate Samples

Li-ion battery capacity forms a decreasing time series. We slide a window of length window_size over the sequence from head to tail to generate samples. For example, if the original sequence is [1, 2, 3, 4, 5] and window_size=3, the generated (x, y) pairs are ([1, 2, 3], [2, 3, 4]) and ([2, 3, 4], [3, 4, 5]); during training, only the last element of each target window (4 and 5, respectively) is used as the label.

import numpy as np

def build_sequences(text, window_size):
    # text: list (or array) of capacity values
    x, y = [], []
    for i in range(len(text) - window_size):
        sequence = text[i:i+window_size]       # input window
        target = text[i+1:i+1+window_size]     # the same window shifted by one step

        x.append(sequence)
        y.append(target)

    return np.array(x), np.array(y)
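As a quick sanity check on the toy sequence above:

x, y = build_sequences([1, 2, 3, 4, 5], window_size=3)
print(x)         # [[1 2 3]
                 #  [2 3 4]]
print(y)         # [[2 3 4]
                 #  [3 4 5]]
print(y[:, -1])  # [4 5] -> the values actually used as labels during training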

II. Training and Test Data

As the NASA data set contains only four batteries, a leave-one-out evaluation is used: each battery in turn is held out for testing while the remaining batteries are used for training. After four iterations, the scores are averaged over all batteries.

Therefore, all the data from the other three Li-ion batteries, plus the first window_size+1 points of the held-out battery, are used as the training set, and the remaining data of the held-out battery forms the test set (the train_ratio argument of get_train_test below is not used in this setting).

def get_train_test(data_dict, name, window_size=8, train_ratio=0.):
    # name: the battery held out for testing; train_ratio is not used in this snippet
    data_sequence = data_dict[name][1]    # capacity sequence of the held-out battery
    train_data, test_data = data_sequence[:window_size+1], data_sequence[window_size+1:]
    train_x, train_y = build_sequences(text=train_data, window_size=window_size)
    for k, v in data_dict.items():
        if k != name:                     # all data of the other batteries goes into training
            data_x, data_y = build_sequences(text=v[1], window_size=window_size)
            train_x, train_y = np.r_[train_x, data_x], np.r_[train_y, data_y]

    return train_x, train_y, list(train_data), list(test_data)
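For instance, holding out B0005 (assuming the battery names B0005, B0006, B0007 and B0018 are the keys of the Battery dictionary):

train_x, train_y, train_data, test_data = get_train_test(Battery, 'B0005', window_size=8)
print(train_x.shape, train_y.shape)      # (N, 8) windows from the other batteries plus one from B0005
print(len(train_data), len(test_data))   # 9 seed points from B0005; the rest of its cycles form the test set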

4. MLP Network

I. Definition

The definition of the MLP is simple. The main hyperparameters are the input feature size (the window length) and the sizes of the hidden layers.

import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self, feature_size=8, hidden_size=[16, 8]):
        super(Net, self).__init__()
        self.feature_size, self.hidden_size = feature_size, hidden_size
        self.layer0 = nn.Linear(self.feature_size, self.hidden_size[0])
        # wrap the hidden layers in nn.ModuleList so their parameters are registered
        # with the model (a plain Python list is ignored by model.parameters() and .to(device))
        self.layers = nn.ModuleList([nn.Sequential(
            nn.Linear(self.hidden_size[i], self.hidden_size[i+1]), nn.ReLU())
            for i in range(len(self.hidden_size) - 1)])
        self.linear = nn.Linear(self.hidden_size[-1], 1)

    def forward(self, x):
        out = self.layer0(x)
        for layer in self.layers:
            out = layer(out)
        out = self.linear(out)
        return out
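A quick sanity check of the input/output shapes (hypothetical usage):

model = Net(feature_size=8, hidden_size=[16, 8])
dummy = torch.randn(4, 8)      # a batch of 4 capacity windows, each of length 8
print(model(dummy).shape)      # torch.Size([4, 1]) -> one predicted capacity per window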

II. Evaluation Metrics

a. Root Mean Square Error (RMSE)

b. Mean Absolute Error (MAE)

c. Relative Error (RE)
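The training function below relies on two helpers, evaluation and relative_error. A minimal sketch of what they compute, assuming RE is measured as the relative difference between the true and predicted end-of-life cycle (the first cycle at which capacity falls below the EOL threshold); the repository's implementation may differ in details:

import numpy as np

def evaluation(y_test, y_predict):
    # RMSE and MAE between the true and predicted capacity sequences
    y_test, y_predict = np.array(y_test), np.array(y_predict)
    mae = np.mean(np.abs(y_test - y_predict))
    rmse = np.sqrt(np.mean((y_test - y_predict) ** 2))
    return mae, rmse

def relative_error(y_test, y_predict, threshold):
    # RE: relative difference between the true and predicted EOL cycle index
    y_test, y_predict = np.array(y_test), np.array(y_predict)
    true_eol = np.argmax(y_test <= threshold) if np.any(y_test <= threshold) else len(y_test)
    pred_eol = np.argmax(y_predict <= threshold) if np.any(y_predict <= threshold) else len(y_predict)
    return abs(true_eol - pred_eol) / true_eol if true_eol > 0 else 0.0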

III. Training Function
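The training loop also calls a setup_seed helper to make runs reproducible. A minimal sketch of such a helper (again, the repository's version may differ):

import random
import numpy as np
import torch

def setup_seed(seed):
    # fix all relevant random seeds so that repeated runs are comparable
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    if torch.cuda.is_available():
        torch.cuda.manual_seed_all(seed)
        torch.backends.cudnn.deterministic = True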

def tain(LR=0.01, feature_size=8, hidden_size=[16,8], weight_decay=0.0, 
         window_size=8, EPOCH=1000, seed=0):
    mae_list, rmse_list, re_list = [], [], []
    result_list = []
    for i in range(4):
        name = Battary_list[i]
        train_x, train_y, train_data, test_data = get_train_test(Battery, name, window_size)
        train_size = len(train_x)
        print('sample size: {}'.format(train_size))

        setup_seed(seed)
        device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
        model = Net(feature_size=feature_size, hidden_size=hidden_size).to(device)

        optimizer = torch.optim.Adam(model.parameters(), lr=LR, weight_decay=weight_decay)
        criterion = nn.MSELoss()

        test_x = train_data.copy()
        loss_list, y_ = [0], []
        for epoch in range(EPOCH):
            X = np.reshape(train_x/Rated_Capacity, (-1, feature_size)).astype(np.float32)
            y = np.reshape(train_y[:,-1]/Rated_Capacity,(-1,1)).astype(np.float32)

            X, y = torch.from_numpy(X).to(device), torch.from_numpy(y).to(device)
            output = model(X)
            loss = criterion(output, y)
            optimizer.zero_grad()              # clear gradients for this training step
            loss.backward()                    # backpropagation, compute gradients
            optimizer.step()                   # apply gradients

            if (epoch + 1) % 100 == 0:
                test_x = train_data.copy()         # restart prediction from the seed window every 100 epochs
                point_list = []
                while (len(test_x) - len(train_data)) < len(test_data):
                    x = np.reshape(np.array(test_x[-feature_size:])/Rated_Capacity,
                                   (-1, feature_size)).astype(np.float32)
                    x = torch.from_numpy(x).to(device)
                    pred = model(x)
                    next_point = pred.data.cpu().numpy()[0, 0] * Rated_Capacity
                    test_x.append(next_point)      # feed the prediction back to predict the next point
                    point_list.append(next_point)  # save the newly predicted point
                y_.append(point_list)              # save all predictions of this evaluation round
                loss_list.append(loss.item())
                mae, rmse = evaluation(y_test=test_data, y_predict=y_[-1])
                re = relative_error(
                    y_test=test_data, y_predict=y_[-1], threshold=Rated_Capacity*0.7)
                print('epoch:{:<2d} | loss:{:<6.4f} | MAE:{:<6.4f} | RMSE:{:<6.4f} | RE:{:<6.4f}'.format(
                    epoch, loss.item(), mae, rmse, re))
            if (len(loss_list) > 1) and (abs(loss_list[-2] - loss_list[-1]) < 1e-5):
                break

        mae, rmse = evaluation(y_test=test_data, y_predict=y_[-1])
        re = relative_error(
            y_test=test_data, y_predict=y_[-1], threshold=Rated_Capacity*0.7)
        mae_list.append(mae)
        rmse_list.append(rmse)
        re_list.append(re)
        result_list.append(y_[-1])
    return re_list, mae_list, rmse_list, result_list

5. Performance

I. Quantitative Evaluation

After setting all parameters, we run the experiment 10 times with different random seeds and report the average.

window_size = 8
EPOCH = 1000
LR = 0.01    # learning rate
feature_size = window_size
hidden_size = [16,8]
weight_decay = 0.0
Rated_Capacity = 2.0

MAE, RMSE, RE = [], [], []
for seed in range(10):
    re_list, mae_list, rmse_list, _ = tain(LR, feature_size, hidden_size, 
                                           weight_decay, window_size, EPOCH, seed)
    RE.append(np.mean(np.array(re_list)))
    MAE.append(np.mean(np.array(mae_list)))
    RMSE.append(np.mean(np.array(rmse_list)))
    print('------------------------------------------------------------------')

print('RE: mean: {:<6.4f} | std: {:<6.4f}'.format(
    np.mean(np.array(RE)), np.std(np.array(RE))))
print('MAE: mean: {:<6.4f} | std: {:<6.4f}'.format(
    np.mean(np.array(MAE)), np.std(np.array(MAE))))
print('RMSE: mean: {:<6.4f} | std: {:<6.4f}'.format(
    np.mean(np.array(RMSE)), np.std(np.array(RMSE))))
MAE      RMSE     RE
0.0852   0.0959   0.4185

II. Qualitative Evaluation

Next, let's look at the predicted performance qualitatively: the fitted capacity curve for each battery.

seed = 0
_, _, _, result_list = tain(LR, feature_size, hidden_size, 
                            weight_decay,window_size, EPOCH, seed)
for i in range(4):
    name = Battary_list[i]
    train_x, train_y, train_data, test_data = get_train_test(Battery, name, window_size)

    aa = train_data[:window_size+1].copy()  # the initial input sequence (seed window)
    aa.extend(result_list[i])               # append the predictions on the test set

    battery = Battery[name]
    fig, ax = plt.subplots(1, figsize=(12, 8))
    ax.plot(battery[0], battery[1], 'b.', label=name)
    ax.plot(battery[0], aa, 'r.', label='Prediction')
    plt.plot([-1,170],[Rated_Capacity*0.7, Rated_Capacity*0.7], c='black', lw=1, ls='--')
    ax.set(xlabel='Discharge cycles', ylabel='Capacity (Ah)', 
           title='Capacity degradation at ambient temperature of 24°C')
    plt.legend()
    plt.show()




III. Conclusion

Using leave-one-out evaluation, the results show that the MLP does not work well on the NASA data set, probably for two reasons:

(1) There are many jump points (the capacity regeneration phenomenon) in the capacity sequences, especially at the beginning of the curves, which makes it difficult for the model to learn a clean degradation trend from the battery history;

(2) The differences between the records are large: for example, B0007 has no data after its capacity reaches 1.4 Ah, and B0018 is very short and fluctuates severely, which makes training difficult;

Note: Good results can be obtained if the capacity curve is short and smooth. See the results of the MLP on the CALCE dataset.

6. More