Detecting Emotions in OpenCV

Overview

Emotion detection with OpenCV typically relies on facial expression analysis using pre-trained classifiers or deep learning models. OpenCV itself provides face detection (e.g., Haar cascades), but emotion recognition requires an additional classification model, such as a CNN loaded through OpenCV's dnn module or trained in a framework such as TensorFlow, Keras, or PyTorch.

Approach

The most common approach involves:

  1. Detecting the face in the image.
  2. Extracting facial landmarks or regions.
  3. Classifying the expression using a deep learning model trained on emotion datasets like FER2013 or AffectNet.

Basic Code Example (Haar + Deep Model)


#include <opencv2/opencv.hpp>
#include <opencv2/dnn.hpp>
#include <iostream>

using namespace cv;
using namespace std;

int main() {
    VideoCapture cap(0);  // default webcam
    if (!cap.isOpened()) {
        cerr << "Cannot open camera" << endl;
        return -1;
    }

    CascadeClassifier faceCascade;
    if (!faceCascade.load("haarcascade_frontalface_default.xml")) {
        cerr << "Cannot load face cascade" << endl;
        return -1;
    }

    dnn::Net net = dnn::readNetFromONNX("emotion-ferplus.onnx");  // Use appropriate ONNX model

    vector<string> emotions = {"neutral", "happiness", "surprise", "sadness", "anger", "disgust", "fear", "contempt"};

    while (true) {
        Mat frame, gray;
        cap >> frame;
        if (frame.empty()) break;  // camera disconnected or stream ended
        cvtColor(frame, gray, COLOR_BGR2GRAY);

        vector<Rect> faces;
        faceCascade.detectMultiScale(gray, faces, 1.3, 5);

        for (const Rect& face : faces) {
            Mat roi = gray(face);
            resize(roi, roi, Size(64, 64));
            roi.convertTo(roi, CV_32F, 1.0 / 255);  // adjust scaling to the model's expected input range

            Mat blob = dnn::blobFromImage(roi, 1.0, Size(64, 64));
            net.setInput(blob);
            Mat prob = net.forward();

            Point classId;
            double confidence;
            minMaxLoc(prob, 0, &confidence, 0, &classId);

            string label = emotions[classId.x];  // prob is a 1x8 row vector, so the class index is in x
            rectangle(frame, face, Scalar(255, 0, 0), 2);
            putText(frame, label, Point(face.x, face.y - 10), FONT_HERSHEY_SIMPLEX, 0.8, Scalar(0, 255, 0), 2);
        }

        imshow("Emotion Detection", frame);
        if (waitKey(10) == 27) break;
    }

    return 0;
}

Note: The preprocessing and label list must match the model used (e.g., FER+ expects 64x64 grayscale input and outputs eight classes in the order listed above).

Pretrained Models

You can download models compatible with OpenCV's dnn module, for example the FER+ emotion recognition model (emotion-ferplus) from the ONNX Model Zoo, which matches the class list and input size used in the code above.

Improving Accuracy

  • Use histogram equalization on grayscale ROI for better contrast.
  • Use facial landmarks to weight the most expressive regions (eyes, mouth) more heavily.
  • Apply temporal smoothing across frames to reduce flickering results.