
CHAPTER-7

Experiments and Results

The accuracy of the bounding boxes and the prediction percentage in the results depends on the

1) Batch size

2) Learning rate

3) Number of training iterations

Batch size is the number of training samples processed in one iteration of training; for example, a batch size of 10 means the network's weights are updated after every 10 samples.

Learning Rate is the training parameter that controls the size of weight and bias changes during learning.

Number of Iterations is the number of training steps after which the network is considered optimally trained.
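For reference, the learning rate (written η below) scales each weight and bias update in plain gradient descent. This is the textbook update rule, given here only to make the parameter concrete; the appendix code actually uses the Adam optimizer, which adapts this step per parameter:

$$ w \leftarrow w - \eta \frac{\partial L}{\partial w}, \qquad b \leftarrow b - \eta \frac{\partial L}{\partial b} $$

where L is the training loss, w a weight, and b a bias.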

IOU: Intersection over Union is an evaluation metric used to measure the accuracy of an object detector on a particular dataset.

In the numerator we compute the area of overlap between the predicted bounding box and the ground-truth bounding box. The denominator is the area of union, i.e., the total area encompassed by both the predicted bounding box and the ground-truth bounding box. Dividing the area of overlap by the area of union yields our final score, the Intersection over Union.
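Written as a formula:

$$ \mathrm{IoU} = \frac{\text{area of overlap}}{\text{area of union}} = \frac{|B_{pred} \cap B_{gt}|}{|B_{pred} \cup B_{gt}|} $$

A minimal sketch of this computation for axis-aligned boxes follows, assuming boxes are given as (x1, y1, x2, y2) corner tuples; the report does not fix a box format, so this layout is an assumption:

def iou(box_a, box_b):
    # Assumed box format: (x1, y1, x2, y2) with x1 < x2 and y1 < y2.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)  # area of overlap
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter  # area of union
    return inter / union if union > 0 else 0.0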

7.1 Establishing Optimal Parameters

This section of the project experiments with various training parameters in order to decide on an optimal set of parameters.

7.1.1 Batch Size

The batch size was set to 10 and experiments were carried out.

A batch size of 10 proved optimal, giving fast training and efficient predictions when tested.
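The sweep itself is not reproduced in the report. A hypothetical sketch of how candidate batch sizes could be compared on the appendix's data split is given below; build_cnn is a placeholder name for the CNN construction code shown in the appendix, not a function the report defines:

# Hypothetical sketch: compare validation accuracy across candidate batch sizes.
for bs in (10, 32, 64):
    model = build_cnn()  # placeholder for the appendix's CNN-building code
    history = model.fit(c_X_train, c_Y_train,
                        validation_data=(c_X_test, c_Y_test),
                        epochs=10, batch_size=bs, verbose=0)
    # The history key is 'val_acc' in older Keras, 'val_accuracy' in newer versions.
    print(bs, max(history.history['val_acc']))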

7.1.2 Learning Rate

1. Learning Rate (0.0001)

[Figures: sample predictions at a learning rate of 0.0001, one per activity class: Jogging, Lying Down, Sitting, Stairs, Standing, Walking.]

7.1.3 Classification Report for CNN:

7.1.4 Classification Report for RNN:

7.1.5 Accuracy and Error Rate of CNN:

7.1.6 Accuracy and Error Rate of RNN:
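The report figures for these four sections are not reproduced here. As a minimal sketch, such reports can be generated with scikit-learn's classification_report (already imported in the appendix code), assuming the c_Y_test, Y_cnn_pred, and class_labels variables defined there:

# Per-class precision, recall and F1 for the CNN predictions.
y_true = np.argmax(c_Y_test, axis=1)  # one-hot labels -> class indices
print(classification_report(y_true, Y_cnn_pred, target_names=list(class_labels)))
# The RNN (LSTM) report is produced the same way from Y_test and Y_lstm_pred.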

7.2 Comparison Graph

The developed models (CNN and RNN) are compared for efficiency in human activity classification; the code that produces the comparison graph is given in the appendix.

CHAPTER-8

Conclusion

This project applies Convolutional Neural Networks and Recurrent Neural Networks to classify daily human activities and compares the accuracy of the two methods.

The main objective of the proposed system is achieved using the following modules:

The system uses a customized standard dataset consisting of numerical data for all six activity classes. A Jupyter notebook and a Python script are used for generating the labels.

Models for both the Convolutional Neural Network and the Recurrent Neural Network were developed, and the standard dataset was loaded into each.

The models are trained, and accuracy, precision and recall values are obtained from each.

The accuracy obtained from the Convolutional Neural Network is 95.02%, which is lower than that of the Recurrent Neural Network, whose accuracy is 98.44%.

The loss and error percentages are also lower for the Recurrent Neural Network (RNN model) than for the Convolutional Neural Network.

The differences in accuracy, loss and error between the two models arise because CNNs take fixed-size inputs and generate fixed-size outputs; they are variations of multilayer perceptrons designed to require minimal preprocessing.

RNNs, unlike feedforward networks such as CNNs, can use their internal memory to process arbitrary sequences of inputs, which is why they achieve higher accuracy than the CNN here.

Future Work

The proposed system provides a method for human activity classification using deep learning on numerical sensor data. The system can be developed further to automate the process. Some possible future enhancements are:

Improving the system's sensor-based data capture.

Improving the accuracy of CNN Model.

Developing a more efficient model to classify human activities.

Developing a model to capture and classify more classes (human activities).

Capturing more real-time labelled data to improve efficiency.


Appendix: Code

HAR.py:

# Imports for data handling, plotting, and the Keras models.
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
from keras.models import Sequential
from keras.layers import Dense, Conv2D, MaxPooling2D, Flatten, Dropout, LSTM
from keras.optimizers import Adam
from sklearn.metrics import classification_report
import glob
import h5py
import itertools
from pathlib import Path
%matplotlib inline

# Fix the random seed for reproducible train/test splits.
random_seed = 611
np.random.seed(random_seed)

# Read a single CSV file of raw accelerometer samples.
def readData(filePath):
    columnNames = ['user_id', 'activity', 'timestamp', 'x-axis', 'y-axis', 'z-axis']
    data = pd.read_csv(filePath, header=None, names=columnNames, na_values=';')
    return data

# Read and concatenate every file matching a glob pattern.
def read_multiple_data(files):
    column_names = ['user-id', 'activity', 'timestamp', 'x-axis', 'y-axis', 'z-axis']
    df = pd.concat([pd.read_csv(f, header=None, names=column_names, sep=',')
                    for f in glob.glob(files)], ignore_index=True)
    return df

# Standardize each column to zero mean and unit variance.
def featureNormalize(dataset):
    mu = np.mean(dataset, axis=0)
    sigma = np.std(dataset, axis=0)
    return (dataset - mu) / sigma

# Plot one accelerometer axis on the given subplot.
def plotAxis(axis, x, y, title):
    axis.plot(x, y)
    axis.set_title(title)
    axis.xaxis.set_visible(False)
    axis.set_ylim([min(y) - np.std(y), max(y) + np.std(y)])
    axis.set_xlim([min(x), max(x)])
    axis.grid(True)

# Plot the x, y and z axes of one activity in a three-row figure.
def plotActivity(activity, data):
    fig, (ax0, ax1, ax2) = plt.subplots(nrows=3, figsize=(15, 10), sharex=True)
    plotAxis(ax0, data['timestamp'], data['x-axis'], 'x-axis')
    plotAxis(ax1, data['timestamp'], data['y-axis'], 'y-axis')
    plotAxis(ax2, data['timestamp'], data['z-axis'], 'z-axis')
    plt.subplots_adjust(hspace=0.2)
    fig.suptitle(activity)
    plt.subplots_adjust(top=0.9)
    plt.show()

# Generate (start, end) index pairs for windows with 50% overlap.
def windows(data, size):
    start = 0
    while start < data.count():
        yield int(start), int(start + size)
        start += (size / 2)

# Slice the signal into fixed-length windows of 90 samples; each window's
# label is the most frequent activity within it.
def segment_signal(data, window_size=90):
    segments = np.empty((0, window_size, 3))
    labels = np.empty((0))
    for (start, end) in windows(data['timestamp'], window_size):
        x = data['x-axis'][start:end]
        y = data['y-axis'][start:end]
        z = data['z-axis'][start:end]
        if len(data['timestamp'][start:end]) == window_size:
            segments = np.vstack([segments, np.dstack([x, y, z])])
            labels = np.append(labels, stats.mode(data['activity'][start:end])[0][0])
    return segments, labels

# Load the WISDM data, drop incomplete rows, window it, and one-hot encode labels.
dataset = read_multiple_data('input/WISDM_at_v2.0_raw_modifya*')
dataset.dropna(axis=0, how='any', inplace=True)
segments, labels = segment_signal(dataset)
class_labels = np.unique(labels)
labels = np.asarray(pd.get_dummies(labels), dtype=np.int8)

labels
array([[0, 0, 0, 0, 0, 1],
       [0, 0, 0, 0, 0, 1],
       [0, 0, 0, 0, 0, 1],
       ...,
       [0, 1, 0, 0, 0, 0],
       [0, 1, 0, 0, 0, 0],
       [0, 1, 0, 0, 0, 0]], dtype=int8)

numOfRows = segments.shape[1]
numOfColumns = segments.shape[2]
print(numOfRows, numOfColumns)
90 3

CNN

# Add a channel dimension for Conv2D: (samples, 90, 3, 1).
reshapedSegments = segments.reshape(segments.shape[0], numOfRows, numOfColumns, 1)
print(reshapedSegments.shape)
(11628, 90, 3, 1)

# Random 80/20 train/test split.
sr = np.random.rand(len(reshapedSegments)) < 0.8
c_X_train = reshapedSegments[sr]
c_X_test = reshapedSegments[~sr]
c_X_train = np.nan_to_num(c_X_train)
c_X_test = np.nan_to_num(c_X_test)
c_Y_train = labels[sr]
c_Y_test = labels[~sr]
print(c_X_train.shape, c_Y_train.shape, c_X_test.shape, c_Y_test.shape)
(9277, 90, 3, 1) (9277, 6) (2351, 90, 3, 1) (2351, 6)

# CNN: one convolution/pooling block followed by dense layers and a 6-way softmax.
cnn_model = Sequential()
cnn_model.add(Conv2D(128, (2, 2), input_shape=(90, 3, 1), activation='relu'))
cnn_model.add(MaxPooling2D(pool_size=(2, 2), padding='valid'))
cnn_model.add(Dropout(0.2))
cnn_model.add(Flatten())
cnn_model.add(Dense(128, activation='relu'))
cnn_model.add(Dense(128, activation='relu'))
cnn_model.add(Dense(6, activation='softmax'))

adam = Adam(lr=0.0001, decay=1e-6)
cnn_model.compile(loss='categorical_crossentropy',
                  optimizer=adam,
                  metrics=['accuracy'])

cnn_model.fit(c_X_train,
              c_Y_train,
              validation_data=(c_X_test, c_Y_test),
              epochs=10,
              batch_size=10)

cnn_score = cnn_model.evaluate(c_X_test, c_Y_test)
print(cnn_score)
2351/2351 [==============================] - 0s 198us/step
[0.18344726768864603, 0.9502339430029775]

Y_cnn_pred = cnn_model.predict_classes(c_X_test)
print(Y_cnn_pred.shape, c_Y_test.shape)
(2351,) (2351, 6)

# Save the model architecture as JSON and the weights as HDF5.
cnn_model_str = cnn_model.to_json()
f = Path("model/cnn_model_str.json")
f.write_text(cnn_model_str)
2598
cnn_model.save_weights('model/cnn_model_weights.h5')

LSTM

# For the LSTM, keep each window as a sequence: (samples, 90, 3).
new_shaped_data = segments.reshape(segments.shape[0], numOfRows, numOfColumns)
print(new_shaped_data.shape)
(11628, 90, 3)

# Random 80/20 train/test split (drawn independently of the CNN split).
sratio = np.random.rand(len(new_shaped_data)) < 0.8
X_train = new_shaped_data[sratio]
X_test = new_shaped_data[~sratio]
X_train = np.nan_to_num(X_train)
X_test = np.nan_to_num(X_test)
Y_train = labels[sratio]
Y_test = labels[~sratio]

# LSTM: a single 128-unit recurrent layer with dropout and a 6-way softmax.
lstm_model = Sequential()
lstm_model.add(LSTM(128, input_shape=(90, 3)))
lstm_model.add(Dropout(0.25))
lstm_model.add(Dense(6, activation='softmax'))

# Note: binary_crossentropy is kept here as in the original run, but with a
# 6-way softmax output the standard choice is categorical_crossentropy;
# binary_crossentropy changes how the accuracy metric is computed.
lstm_model.compile(loss='binary_crossentropy',
                   optimizer=adam,
                   metrics=['accuracy'])

lstm_model.fit(X_train,
               Y_train,
               validation_data=(X_test, Y_test),
               epochs=10,
               batch_size=10)

lstm_score = lstm_model.evaluate(X_test, Y_test, verbose=2)
print(lstm_score)
[0.05474479000914698, 0.9843684068751527]

Y_lstm_pred = lstm_model.predict_classes(X_test)
print(Y_lstm_pred.shape, Y_test.shape)
(2367,) (2367, 6)

# Save the LSTM architecture and weights.
lstm_model_str = lstm_model.to_json()
f = Path("model/lstm_model_str.json")
f.write_text(lstm_model_str)
1637
lstm_model.save_weights('model/lstm_model_weights.h5')

COMPARISON GRAPH

# Bar chart comparing accuracy, error and loss of the two models.
n_groups = 2
ind = np.arange(n_groups)
algo = ('CNN', 'LSTM')
accuracy = (cnn_score[1], lstm_score[1])
error = (1 - cnn_score[1], 1 - lstm_score[1])
loss = (cnn_score[0], lstm_score[0])
print(error, accuracy, loss)
(0.04976605699702252, 0.0156315931248473) (0.9502339430029775, 0.9843684068751527) (0.18344726768864603, 0.05474479000914698)

fig, ax = plt.subplots(figsize=(8, 6), dpi=80)
bar_width = 0.2
opacity = 0.9
p1 = plt.bar(ind + bar_width, loss, bar_width, alpha=opacity, color='blue', label='loss')
p1 = plt.bar(ind, error, bar_width, alpha=opacity, color='red', label='error', bottom=accuracy)
p2 = plt.bar(ind, accuracy, bar_width, alpha=opacity, color='lime', label='accuracy')
plt.title('Comparison Graph')
plt.xticks(ind + bar_width / 2, algo)
plt.xlabel('Algorithm')
plt.ylabel('Scores')
plt.legend()
plt.tight_layout()
plt.show()

TEST MODEL ON NEW DATA

# Load unseen walking data and shape the first 90 samples for the LSTM.
test_data = pd.read_csv('input/test_walk.csv')
test_data.shape
(13779, 3)

input_data = test_data.loc[0:89, :]
input_data.head()

     x-axis    y-axis    z-axis
0  0.009521  5.468887  7.698410
1 -0.194946  5.472244  7.702713
2 -0.164063  5.456436  7.709900
3 -0.213623  5.471512  7.703903
4 -0.198776  5.495941  7.685471

input_data = input_data.to_numpy().reshape((1, 90, 3))
input_data.shape
(1, 90, 3)

# Reload the saved LSTM architecture and weights.
from keras.models import model_from_json
from pathlib import Path

f = Path('model/lstm_model_str.json')
loaded_lstm_str = f.read_text()
loaded_lstm = model_from_json(loaded_lstm_str)
loaded_lstm.load_weights('model/lstm_model_weights.h5')
loaded_lstm.summary()

Layer (type)                 Output Shape              Param #
=================================================================
lstm_1 (LSTM)                (None, 128)               67584
_________________________________________________________________
dropout_2 (Dropout)          (None, 128)               0
_________________________________________________________________
dense_4 (Dense)              (None, 6)                 774
=================================================================
Total params: 68,358
Trainable params: 68,358
Non-trainable params: 0

# Predict the activity for the new window and report the most likely class.
predictions = loaded_lstm.predict(input_data)
result = predictions[0]
print(result)
[1.1413740e-02 3.6693010e-03 9.6202236e-01 3.0899953e-04 8.8642472e-03
 1.3721360e-02]

most_likely_class_index = int(np.argmax(result))
class_likelihood = result[most_likely_class_index]
class_likelihood
0.96202236

class_label = class_labels[most_likely_class_index]
print('Predicted activity is :', class_label)
Predicted activity is : Sitting
