While debugging a project today I hit an error about a missing face-alignment module. A quick search revealed that this module bills itself as the world's most accurate detector, which immediately piqued my interest, so I pulled it down from GitHub to study.
The repository is here.
Installation is simple, just pip:
pip install face-alignment
As you can see, the official docs already provide decent code examples that are easy to use directly. For my purposes, though, the official examples fall short: the 68 detected landmarks are not saved anywhere, and the detected points are neither drawn on the original image nor written out. So I made a few improvements of my own; the implementation is as follows:
#!/usr/bin/env python
# encoding:utf-8
from __future__ import division
"""
__Author__: 沂水寒城
Function: facial landmark detection
https://github.com/1adrianb/face-alignment
"""
import os
import json
import cv2
import face_alignment
from skimage import io


def face2d(data="aflw-test.jpg", save_path="output_2d.jpg"):
    """
    2D landmark detection
    """
    # note: recent versions of face_alignment rename _2D to LandmarksType.TWO_D
    fa = face_alignment.FaceAlignment(
        face_alignment.LandmarksType._2D, flip_input=False, device="cpu"
    )
    inputs = io.imread(data)
    preds = fa.get_landmarks(inputs)
    if not preds:
        # get_landmarks returns None when no face is found
        print("no face detected in: ", data)
        return
    print("type_preds: ", type(preds))
    print("preds_length: ", len(preds))
    frame = cv2.imread(data)
    # one (68, 2) array per detected face; keep the first face only
    points_list = preds[0].tolist()
    print("points_list_length: ", len(points_list))
    for one_point in points_list:
        x, y = one_point
        cv2.circle(
            frame, (int(x), int(y)), 2, (0, 255, 255), thickness=-1, lineType=cv2.FILLED
        )
    # save the raw landmarks next to the rendered image
    with open(os.path.splitext(save_path)[0] + ".json", "w") as f:
        f.write(json.dumps(points_list))
    cv2.imwrite(save_path, frame)


def face3d(data="aflw-test.jpg", save_path="output_3d.jpg"):
    """
    3D landmark detection
    """
    fa = face_alignment.FaceAlignment(
        face_alignment.LandmarksType._3D, flip_input=False, device="cpu"
    )
    inputs = io.imread(data)
    preds = fa.get_landmarks(inputs)
    if not preds:
        print("no face detected in: ", data)
        return
    print("type_preds: ", type(preds))
    print("preds_length: ", len(preds))
    frame = cv2.imread(data)
    # one (68, 3) array per detected face; keep the first face only
    points_list = preds[0].tolist()
    print("points_list_length: ", len(points_list))
    for one_point in points_list:
        # drop the depth value when drawing on the 2D image
        x, y, _ = one_point
        cv2.circle(
            frame, (int(x), int(y)), 2, (0, 255, 255), thickness=-1, lineType=cv2.FILLED
        )
    with open(os.path.splitext(save_path)[0] + ".json", "w") as f:
        f.write(json.dumps(points_list))
    cv2.imwrite(save_path, frame)


if __name__ == "__main__":
    face2d(data="aflw-test.jpg", save_path="output_2d.jpg")
    face3d(data="aflw-test.jpg", save_path="output_3d.jpg")
This version saves both the landmark results and the corresponding rendered images, which makes it easy to fold them into downstream business calculations later. The official API encapsulates 2D and 3D very well, so they differ by only a single parameter; for clarity I split them into two separate functions here.
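The saved JSON is simply a nested list of [x, y] pairs, so reading it back for downstream use takes only a few lines. A minimal sketch of the round trip (the sample coordinates below are copied from the 2D output, just to illustrate the format):

```python
import json
import numpy as np

# A few landmark pairs in the same nested-list format that face2d() writes out.
points_list = [[143.0, 237.0], [143.0, 261.0], [149.0, 285.0]]

# Round-trip through JSON exactly as the functions above do.
encoded = json.dumps(points_list)
decoded = json.loads(encoded)
assert decoded == points_list  # plain nested lists survive the round trip

# For numeric work, an (N, 2) NumPy array is usually more convenient.
pts = np.asarray(decoded)
print(pts.shape)         # (3, 2)
print(pts.mean(axis=0))  # centroid of the loaded points
```

With a full 68-point file the array would be (68, 2), ready for slicing, affine alignment, or whatever downstream computation you need.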
On first use, the model weights are downloaded automatically from the cloud, as shown below:
Downloading: "https://www.adrianbulat.com/downloads/python-fan/3DFAN4-4a694010b9.zip" to C:\Users\18706\.cache\torch\hub\checkpoints\3DFAN4-4a694010b9.zip 15%|███████████████▋ | 13.5M/91.9M [08:24<26:03, 52.6kB/s]
The sample image used here is shown below:
The 2D detection results are as follows:
[[143.0, 237.0], [143.0, 261.0], [149.0, 285.0], [152.0, 306.0], [155.0, 327.0], [164.0, 342.0], [173.0, 351.0], [185.0, 354.0], [212.0, 360.0], [242.0, 360.0], [263.0, 357.0], [281.0, 351.0], [299.0, 339.0], [311.0, 321.0], [317.0, 300.0], [326.0, 279.0], [332.0, 255.0], [158.0, 207.0], [164.0, 198.0], [179.0, 198.0], [191.0, 198.0], [200.0, 204.0], [245.0, 204.0], [257.0, 201.0], [272.0, 201.0], [287.0, 207.0], [302.0, 216.0], [221.0, 225.0], [221.0, 240.0], [218.0, 252.0], [215.0, 261.0], [203.0, 276.0], [209.0, 276.0], [218.0, 279.0], [227.0, 276.0], [233.0, 276.0], [173.0, 228.0], [179.0, 222.0], [191.0, 222.0], [200.0, 231.0], [191.0, 231.0], [179.0, 231.0], [248.0, 231.0], [257.0, 225.0], [269.0, 225.0], [281.0, 234.0], [269.0, 237.0], [257.0, 237.0], [185.0, 306.0], [194.0, 297.0], [209.0, 291.0], [215.0, 291.0], [224.0, 291.0], [239.0, 300.0], [248.0, 309.0], [236.0, 312.0], [224.0, 315.0], [215.0, 315.0], [203.0, 315.0], [194.0, 312.0], [188.0, 306.0], [206.0, 300.0], [215.0, 300.0], [227.0, 300.0], [248.0, 309.0], [224.0, 303.0], [215.0, 303.0], [206.0, 303.0]]
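Because face-alignment follows the standard 68-point iBUG layout, the indices in this list have fixed meanings: for example, points 36-41 outline the left eye and 42-47 the right eye. That makes simple geometric metrics easy to compute straight from the saved list. A quick sketch using the eye points copied from the output above:

```python
import math

# Eye landmarks copied from the 2D output above
# (indices 36-41: left eye, 42-47: right eye in the 68-point iBUG layout).
left_eye = [[173.0, 228.0], [179.0, 222.0], [191.0, 222.0],
            [200.0, 231.0], [191.0, 231.0], [179.0, 231.0]]
right_eye = [[248.0, 231.0], [257.0, 225.0], [269.0, 225.0],
             [281.0, 234.0], [269.0, 237.0], [257.0, 237.0]]

def center(points):
    """Mean of a list of [x, y] points."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

lc, rc = center(left_eye), center(right_eye)
inter_ocular = math.dist(lc, rc)  # distance between the two eye centers
print(round(inter_ocular, 1))
```

The inter-ocular distance is a common normalizer for landmark error metrics, so having the raw JSON on disk pays off quickly.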
The visualized result is shown below:
The 3D detection results are as follows:
[[137.0, 240.0, -85.92153930664062], [140.0, 264.0, -81.1578140258789], [143.0, 288.0, -76.27053833007812], [146.0, 306.0, -69.03135681152344], [152.0, 327.0, -53.79359817504883], [161.0, 342.0, -30.05421257019043], [170.0, 348.0, -2.8260068893432617], [185.0, 354.0, 23.47999382019043], [212.0, 360.0, 38.62155532836914], [239.0, 357.0, 31.71284294128418], [263.0, 354.0, 12.164687156677246], [284.0, 348.0, -10.076691627502441], [302.0, 333.0, -29.4429931640625], [314.0, 315.0, -41.68869400024414], [320.0, 297.0, -46.93766784667969], [326.0, 276.0, -50.34530258178711], [335.0, 252.0, -53.96144104003906], [152.0, 207.0, -7.665694713592529], [164.0, 201.0, 6.131765842437744], [176.0, 198.0, 16.931467056274414], [188.0, 198.0, 24.6306095123291], [200.0, 201.0, 29.18903923034668], [245.0, 204.0, 37.82160568237305], [257.0, 201.0, 37.36283874511719], [269.0, 201.0, 34.105506896972656], [284.0, 204.0, 28.42513656616211], [299.0, 216.0, 18.271329879760742], [221.0, 225.0, 37.87166976928711], [218.0, 237.0, 48.27630615234375], [215.0, 249.0, 60.441802978515625], [215.0, 261.0, 63.29471969604492], [203.0, 273.0, 40.13742446899414], [209.0, 276.0, 45.007755279541016], [218.0, 276.0, 48.51951599121094], [227.0, 276.0, 47.70017623901367], [233.0, 276.0, 44.97114181518555], [170.0, 228.0, 7.120547771453857], [179.0, 222.0, 17.11659812927246], [188.0, 222.0, 19.724153518676758], [200.0, 228.0, 19.01317024230957], [191.0, 231.0, 20.5870418548584], [179.0, 231.0, 16.077821731567383], [248.0, 231.0, 28.518339157104492], [257.0, 225.0, 32.973506927490234], [269.0, 225.0, 34.33488082885742], [278.0, 231.0, 26.97059440612793], [269.0, 234.0, 32.821327209472656], [257.0, 234.0, 33.2925910949707], [185.0, 306.0, 29.884431838989258], [194.0, 297.0, 42.56733703613281], [209.0, 291.0, 50.519901275634766], [215.0, 291.0, 52.7900276184082], [221.0, 291.0, 52.88304901123047], [236.0, 300.0, 48.29838562011719], [248.0, 309.0, 38.22211456298828], [236.0, 312.0, 48.35514831542969], [224.0, 315.0, 52.609046936035156], [212.0, 315.0, 52.29711151123047], [203.0, 315.0, 49.51622009277344], [194.0, 309.0, 42.60727310180664], [188.0, 303.0, 30.705596923828125], [206.0, 300.0, 46.46957778930664], [215.0, 300.0, 49.569786071777344], [224.0, 300.0, 49.022953033447266], [248.0, 309.0, 38.07121276855469], [224.0, 303.0, 49.78551483154297], [215.0, 303.0, 49.560916900634766], [206.0, 303.0, 47.098392486572266]]
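The only difference from the 2D output is the extra third column: a relative depth estimate per landmark, not an absolute distance. Slicing the array lets you reuse the 2D code paths or inspect the depth range on its own. A small sketch with a handful of points copied from the output above:

```python
import numpy as np

# A few 3D landmarks copied from the output above; each entry is [x, y, z],
# where z is a relative depth value.
preds_3d = np.array([
    [137.0, 240.0, -85.92153930664062],
    [140.0, 264.0, -81.1578140258789],
    [212.0, 360.0, 38.62155532836914],
    [221.0, 225.0, 37.87166976928711],
])

xy = preds_3d[:, :2]  # drop depth -> the same points face2d() would draw
z = preds_3d[:, 2]    # depth alone, useful for pose or relief analysis
print(xy.shape, z.min(), z.max())
```

With the full 68 points, z.min()/z.max() gives a quick sanity check on the recovered face relief before plugging the landmarks into any 3D-aware processing.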
The visualized result is shown below:
That wraps up the main practice in this post; if you are interested, try it out for yourself.