Image Processing: Capturing Student Attendance Data

The student attendance record plays a very important role in primary, secondary, and tertiary education. The purpose of this record is to monitor student activity in the teaching and learning process, and it is regarded as one of the important learning assessments. Student attendance data is currently recorded in various ways, such as fingerprint scanning, radio frequency identification (RFID), facial recognition systems, Android-based applications, and others. However, many conventional ways (i


INTRODUCTION
Recording student attendance is important to carry out because it is a key means of monitoring the level of student activity in the classroom. Students' attendance can also be used as part of the lecturers' assessment. Attendance records are kept in primary, secondary, and tertiary education, and there are various ways to produce them, e.g., recording on paper, by fingerprint, or with a web camera using image processing. Image processing has largely been applied by lecturer-researchers, student-researchers, and the public to solve problems and find solutions in real life [1][2][3]. One study related to image processing that is in line with this article is the research conducted by Siswo Wardoyo, Romi Wiryadinata, and Raya Sagita, entitled "The eigenface-based facial recognition system through the principal component analysis method". In that research, a webcam was used to capture facial images, which were then processed with the principal component analysis method. Image processing is the processing of image pixels in the form of digital images for a particular purpose [2]. It is done for several reasons, e.g., to recover an original image degraded by noise, or to obtain the distinct and compatible images required for further phases of image analysis [2]. The processed image is digitally transformed by the computer into a numerical representation as the processing result [2].

The method used in this research is learning vector quantization (LVQ). LVQ is a learning-based method carried out on a supervised competitive layer. A competitive layer automatically learns to classify input vectors; the resulting classes depend only on the distances among the input vectors.
If two input vectors are approximately equal, the competitive layer places them in the same class [1]. LVQ is a pattern classification method in which each output unit represents a particular category or class (several output units may be used for each class). The weight vector of an output unit is often called a reference vector. It is assumed that a set of training patterns with known classifications is provided, along with an initial distribution of the reference vectors. After training is finished, the LVQ network classifies an input vector by assigning it to the same class as the output unit whose reference vector is closest to it [3]. The LVQ architecture is shown in picture 2 below. The LVQ algorithm is as follows: 1. Specify: weights (w), maximum epochs, expected minimum error (eps), and learning rate (α).
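The competitive update described above can be sketched in numpy. This is a minimal illustration of the classic LVQ1 variant, not the authors' exact implementation: the function names, the one-reference-vector-per-class initialisation, and the learning-rate decay schedule are assumptions for illustration only.

```python
import numpy as np

def lvq1_train(X, y, n_classes, alpha=0.1, max_epochs=10, eps=1e-4):
    """Minimal LVQ1 sketch with one reference (weight) vector per class.

    The winning reference vector is pulled toward same-class inputs and
    pushed away from different-class inputs; the learning rate decays
    each epoch until it falls below the minimum error threshold eps.
    """
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    # Initialise each reference vector with the first sample of its class.
    W = np.stack([X[y == c][0].copy() for c in range(n_classes)])
    for _ in range(max_epochs):
        for xi, yi in zip(X, y):
            j = int(np.argmin(np.linalg.norm(W - xi, axis=1)))  # winner
            if j == yi:
                W[j] += alpha * (xi - W[j])  # move toward the input
            else:
                W[j] -= alpha * (xi - W[j])  # move away from the input
        alpha *= 0.5  # decay the learning rate
        if alpha < eps:
            break
    return W

def lvq1_predict(W, X):
    """Assign each input to the class of its nearest reference vector."""
    X = np.asarray(X, dtype=float)
    d = np.linalg.norm(X[:, None, :] - W[None, :, :], axis=2)
    return np.argmin(d, axis=1)
```

For two well-separated clusters of points, the trained reference vectors classify every training sample correctly.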

ACADEMIC DISCIPLINE AND SUB-DISCIPLINES
Programming, Database

SUBJECT CLASSIFICATION
Computer Science

GENERAL DESCRIPTION OF THE SYSTEM
This research was conducted in several steps, as follows:
1. The student attendance files from which the images were taken were files with a sufficient level of clarity in color and text.
2. A scanner was used as the device to capture the student attendance images.
3. The segments identified in the student attendance images were the student number, student name, student signature, the sign 'X', the sign '-', or a blank.
4. The captured images were stored as JPEG image files.
The general framework for producing student attendance information is shown in figure 2. The captured image was the paper-based student attendance file. The tool used as the image capturer was a scanner. The image captured by the scanner was recorded and stored with the JPEG file extension. From this file, the application was able to recognize the existing patterns based on the pattern created and set from the physical attendance sheet itself. The program only recognized characters formatted for the application. An example of a physical student attendance sheet is shown below. Based on the example attendance images that had been taken, the image processing application for capturing student attendance was built to recognize character numbers, letters, student signatures, and blank parts of the attendance sheet. The characters identified were 0-9, A-Z, a-z, '-', 'x', and student signatures.
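As a small illustration of the recognized character set described above, an acceptance check might look like the following. The constant and function names are hypothetical; only the character set itself comes from the text (signatures are handled separately and are not plain characters).

```python
import string

# Characters the application is described as recognizing:
# 0-9, A-Z, a-z, '-', and 'x' (student signatures are handled separately).
RECOGNIZED = set(string.digits) | set(string.ascii_letters) | {"-", "x"}

def is_recognized(ch):
    """Return True if the character is in the trained pattern set."""
    return ch in RECOGNIZED
```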

PHASES OF DATA TRAINING PROCESS AND IMAGE DETECTION
The training data process design was used to recognize tested letters against letters of the same type. This process generated new weights for the trained letters. The image detection process was the process of finding the relevant area of the student attendance file based on the name and the existing signature. The data training and image detection processes are described below.

RESULT OF THE IMAGE PROCESSING PROCESS
The steps for processing the image were as follows:

Data Training Process
Steps:
1. Prepare a sample of letters using fonts from the attendance list, divided into capital letters and small letters, for which 4 fonts were provided. In this research, the Calibri, Cambria, Carlito, and Candara fonts were used.
i. Calculate the white area of the letter before it is resized; if there are more than 200 white pixels, put the image into the dataset.
j. The first set of letters is used as the weight dataset, and the second and further sets of letters are used as the training dataset.
k. Run the LVQ process.
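The white-area filter and the weight/training split above can be sketched as follows. This is a minimal interpretation under the assumption that "white area" means the count of white (foreground) pixels; the function names are hypothetical.

```python
import numpy as np

def accept_glyph(binary_glyph, min_white=200):
    """Keep a glyph image only if it has more than `min_white` white
    (nonzero) pixels, measured before resizing."""
    return int(np.count_nonzero(binary_glyph)) > min_white

def split_sets(glyph_batches):
    """The first batch of letters seeds the weight (reference) dataset;
    the remaining batches are used as the training dataset."""
    return glyph_batches[0], glyph_batches[1:]
```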
Steps for the capital/small letters:
a. Read the image containing a collection of letters.
b. Convert the color image to grayscale.
c. Resize the font to 1.5 times its original size.
d. Convert to a binary image. In this step, the resulting image is black and white.
e. Invert the binary image, reversing text and background: the text becomes white and the background black.
f. Find the bounding box per letter with the find contour function.
g. Extract the letters with the masks obtained from the find contour function.
j. Calculate the accuracy.

Letter Detection Process
Steps:
a. Find the area containing the students' data and signatures. Method: find the largest rectangular area by using the find contour function.
k. Find the bounding box per letter with the find contour function.
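Finding the largest region in the scanned sheet can be sketched as follows. This numpy-only flood-fill version is a stand-in for "largest rectangle via find contour": it returns the bounding box of the largest 4-connected foreground region, and the function name is an assumption.

```python
import numpy as np

def largest_foreground_box(mask):
    """Bounding box (x0, y0, x1, y1) of the largest connected foreground
    region in a boolean mask, found with a depth-first flood fill."""
    seen = np.zeros_like(mask, dtype=bool)
    best, best_size = None, 0
    h, w = mask.shape
    for sy, sx in zip(*np.nonzero(mask)):
        if seen[sy, sx]:
            continue
        stack, pixels = [(sy, sx)], []
        seen[sy, sx] = True
        while stack:
            y, x = stack.pop()
            pixels.append((y, x))
            # visit 4-connected neighbours that are foreground and unseen
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                    seen[ny, nx] = True
                    stack.append((ny, nx))
        if len(pixels) > best_size:
            ys, xs = zip(*pixels)
            best_size = len(pixels)
            best = (int(min(xs)), int(min(ys)), int(max(xs)) + 1, int(max(ys)) + 1)
    return best
```

The returned box would then delimit the table area holding the students' data and signatures, within which the per-letter bounding boxes are searched.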