hyperpose.Model.pose_proposal package
Submodules
hyperpose.Model.pose_proposal.define module
class hyperpose.Model.pose_proposal.define.CocoPart(value)
Bases: enum.Enum
An enumeration.
Instance = 1
LAnkle = 13
LEar = 17
LElbow = 6
LEye = 15
LHip = 11
LKnee = 12
LShoulder = 5
LWrist = 7
Nose = 0
RAnkle = 10
REar = 16
RElbow = 3
REye = 14
RHip = 8
RKnee = 9
RShoulder = 2
RWrist = 4
class hyperpose.Model.pose_proposal.define.MpiiPart(value)
Bases: enum.Enum
An enumeration.
Center = 14
Headtop = 0
Instance = 15
LAnkle = 13
LElbow = 6
LHip = 11
LKnee = 12
LShoulder = 5
LWrist = 7
Neck = 1
RAnkle = 10
RElbow = 3
RHip = 8
RKnee = 9
RShoulder = 2
RWrist = 4
hyperpose.Model.pose_proposal.define.get_coco_flip_list()
hyperpose.Model.pose_proposal.define.get_mpii_flip_list()
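Flip lists like the ones returned by these helpers pair each left-side part index with its right-side counterpart, so keypoints can be reordered when an image is flipped horizontally during augmentation. Below is a minimal sketch using the CocoPart indices documented above; the `coco_flip_list` contents and the `flip_keypoints` helper are illustrative assumptions, not the library's actual return values.

```python
# Hypothetical sketch: flip_list[i] gives the part index that part i
# becomes after a horizontal flip. Indices follow the CocoPart enum above.
coco_flip_list = [
    0,   # Nose      -> Nose
    1,   # Instance  -> Instance
    5,   # RShoulder -> LShoulder
    6,   # RElbow    -> LElbow
    7,   # RWrist    -> LWrist
    2,   # LShoulder -> RShoulder
    3,   # LElbow    -> RElbow
    4,   # LWrist    -> RWrist
    11,  # RHip      -> LHip
    12,  # RKnee     -> LKnee
    13,  # RAnkle    -> LAnkle
    8,   # LHip      -> RHip
    9,   # LKnee     -> RKnee
    10,  # LAnkle    -> RAnkle
    15,  # REye      -> LEye
    14,  # LEye      -> REye
    17,  # REar      -> LEar
    16,  # LEar      -> REar
]

def flip_keypoints(kpts, img_width, flip_list):
    """Mirror keypoints horizontally and swap left/right part slots.
    A hypothetical helper, not part of the library."""
    flipped = [None] * len(kpts)
    for i, (x, y) in enumerate(kpts):
        # mirror the x coordinate; keep invisible keypoints (x < 0) as-is
        new_xy = (img_width - 1 - x, y) if x >= 0 else (x, y)
        flipped[flip_list[i]] = new_xy
    return flipped
```

Because a valid flip list is a self-inverse permutation, applying it twice must return every part to its original slot.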
hyperpose.Model.pose_proposal.eval module
hyperpose.Model.pose_proposal.eval.evaluate(model, dataset, config, vis_num=30, total_eval_num=10000, enable_multiscale_search=False)
Evaluate pipeline of PoseProposal class models.
Given a model and a dataset, the evaluate pipeline starts automatically. It will:
1. load the newest model at path ./save_dir/model_name/model_dir/newest_model.npz
2. perform inference and parsing over the chosen evaluation dataset
3. visualize model outputs during evaluation in directory ./save_dir/model_name/eval_vis_dir
4. output model metrics by calling dataset.official_eval()
- Parameters
- model : tensorlayer.models.MODEL
a preset or user-defined model object, obtained by the Model.get_model() function
- dataset : dataset
a constructed dataset object, obtained by the Dataset.get_dataset() function
- vis_num : Int
an integer indicating how many model outputs should be visualized
- total_eval_num : Int
an integer indicating how many images should be evaluated
- Returns
- None
hyperpose.Model.pose_proposal.eval.infer_one_img(model, postprocessor, img, img_id=-1, is_visual=False, save_dir='./vis_dir/pose_proposal')
hyperpose.Model.pose_proposal.eval.test(model, dataset, config, vis_num=30, total_test_num=10000, enable_multiscale_search=False)
Test pipeline of PoseProposal class models.
Given a model and a dataset, the test pipeline starts automatically. It will:
1. load the newest model at path ./save_dir/model_name/model_dir/newest_model.npz
2. perform inference and parsing over the chosen evaluation dataset
3. visualize model outputs during evaluation in directory ./save_dir/model_name/eval_vis_dir
4. output model metrics by calling dataset.official_eval()
- Parameters
- model : tensorlayer.models.MODEL
a preset or user-defined model object, obtained by the Model.get_model() function
- dataset : dataset
a constructed dataset object, obtained by the Dataset.get_dataset() function
- vis_num : Int
an integer indicating how many model outputs should be visualized
- total_test_num : Int
an integer indicating how many images should be evaluated
- Returns
- None
hyperpose.Model.pose_proposal.eval.visualize(img, img_id, humans, predicts, hnei, wnei, hout, wout, limbs, save_dir)
hyperpose.Model.pose_proposal.infer module
hyperpose.Model.pose_proposal.model module
hyperpose.Model.pose_proposal.train module
hyperpose.Model.pose_proposal.train.get_paramed_map_fn(augmentor, preprocessor, data_format='channels_first')
hyperpose.Model.pose_proposal.train.parallel_train(train_model, dataset, config)
Parallel train pipeline of PoseProposal class models.
Given a model and a dataset, the train pipeline starts automatically. It will:
1. store and restore checkpoints in directory ./save_dir/model_name/model_dir
2. log loss information in ./save_dir/model_name/log.txt
3. periodically visualize model outputs during training in ./save_dir/model_name/train_vis_dir
The newest model is saved at ./save_dir/model_name/model_dir/newest_model.npz.
- Parameters
- train_model : tensorlayer.models.MODEL
a preset or user-defined model object, obtained by the Model.get_model() function
- dataset : dataset
a constructed dataset object, obtained by the Dataset.get_dataset() function
- Returns
- None
hyperpose.Model.pose_proposal.train.regulize_loss(target_model, weight_decay_factor)
hyperpose.Model.pose_proposal.train.single_train(train_model, dataset, config)
Single train pipeline of PoseProposal class models.
Given a model and a dataset, the train pipeline starts automatically. It will:
1. store and restore checkpoints in directory ./save_dir/model_name/model_dir
2. log loss information in ./save_dir/model_name/log.txt
3. periodically visualize model outputs during training in ./save_dir/model_name/train_vis_dir
The newest model is saved at ./save_dir/model_name/model_dir/newest_model.npz.
- Parameters
- train_model : tensorlayer.models.MODEL
a preset or user-defined model object, obtained by the Model.get_model() function
- dataset : dataset
a constructed dataset object, obtained by the Dataset.get_dataset() function
- Returns
- None
hyperpose.Model.pose_proposal.utils module
hyperpose.Model.pose_proposal.utils.cal_iou(bbx1, bbx2)
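cal_iou compares two bounding boxes. The sketch below shows the standard intersection-over-union computation, assuming the [x, y, w, h] top-left-corner box format used by the annotations in this module; the library's own implementation and conventions may differ.

```python
def iou(bbx1, bbx2):
    """Intersection-over-union of two boxes in [x, y, w, h] format,
    where (x, y) is the top-left corner. A sketch of what cal_iou
    presumably computes, not the library's actual code."""
    x1, y1, w1, h1 = bbx1
    x2, y2, w2, h2 = bbx2
    # overlap extents along each axis (clamped at zero for disjoint boxes)
    ix = max(0.0, min(x1 + w1, x2 + w2) - max(x1, x2))
    iy = max(0.0, min(y1 + h1, y2 + h2) - max(y1, y2))
    inter = ix * iy
    union = w1 * h1 + w2 * h2 - inter
    return inter / union if union > 0 else 0.0
```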
hyperpose.Model.pose_proposal.utils.draw_bbx(img, img_pc, rx, ry, rw, rh, threshold=0.7)
hyperpose.Model.pose_proposal.utils.draw_edge(img, img_e, rx, ry, rw, rh, hnei, wnei, hout, wout, limbs, threshold=0.7)
hyperpose.Model.pose_proposal.utils.draw_results(img, predicts, targets, parts, limbs, save_dir, threshold=0.3, name='', is_train=True, data_format='channels_first')
hyperpose.Model.pose_proposal.utils.get_colors(dataset_type)
hyperpose.Model.pose_proposal.utils.get_flip_list(dataset_type)
hyperpose.Model.pose_proposal.utils.get_limbs(dataset_type)
hyperpose.Model.pose_proposal.utils.get_parts(dataset_type)
hyperpose.Model.pose_proposal.utils.get_pose_proposals(kpts_list, bbxs, hin, win, hout, wout, hnei, wnei, parts, limbs, img_mask=None, data_format='channels_first')
hyperpose.Model.pose_proposal.utils.non_maximium_supress(bbxs, scores, thres)
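non_maximium_supress prunes overlapping detections. The sketch below shows greedy NMS under the same assumed [x, y, w, h] top-left box format; the `nms` and `_iou` names are illustrative, and the library's actual return convention may differ.

```python
def _iou(b1, b2):
    # IoU of two [x, y, w, h] boxes (top-left corner convention)
    ix = max(0.0, min(b1[0] + b1[2], b2[0] + b2[2]) - max(b1[0], b2[0]))
    iy = max(0.0, min(b1[1] + b1[3], b2[1] + b2[3]) - max(b1[1], b2[1]))
    inter = ix * iy
    union = b1[2] * b1[3] + b2[2] * b2[3] - inter
    return inter / union if union > 0 else 0.0

def nms(bbxs, scores, thres):
    """Greedy non-maximum suppression: repeatedly keep the highest-scoring
    box and drop remaining boxes whose IoU with it exceeds `thres`.
    Returns indices of the kept boxes. A hypothetical sketch of
    non_maximium_supress."""
    order = sorted(range(len(bbxs)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if _iou(bbxs[best], bbxs[i]) <= thres]
    return keep
```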
hyperpose.Model.pose_proposal.utils.postprocess(predicts, parts, limbs, data_format='channels_first', colors=None)
Postprocess function of PoseProposal class models.
Takes the model-predicted feature maps delta, tx, ty, tw, th, te, te_mask and outputs parsed human objects, each containing all detected keypoints of one person.
- Parameters
- predicts : list
a list of model outputs: delta, tx, ty, tw, th, te, te_mask
delta: keypoint confidence feature map, shape [C,H,W] (channels_first) or [H,W,C] (channels_last)
tx: keypoint bounding-box center x coordinates, divided by the grid size, shape [C,H,W] (channels_first) or [H,W,C] (channels_last)
ty: keypoint bounding-box center y coordinates, divided by the grid size, shape [C,H,W] (channels_first) or [H,W,C] (channels_last)
tw: keypoint bounding-box widths, divided by the image width, shape [C,H,W] (channels_first) or [H,W,C] (channels_last)
th: keypoint bounding-box heights, divided by the image height, shape [C,H,W] (channels_first) or [H,W,C] (channels_last)
te: edge confidence feature map, shape [C,H,W,Hnei,Wnei] (channels_first) or [H,W,Hnei,Wnei,C] (channels_last)
te_mask: mask of the edge confidence feature map, used for loss calculation, shape [C,H,W,Hnei,Wnei] (channels_first) or [H,W,Hnei,Wnei,C] (channels_last)
- parts, limbs : list
the keypoint parts and limb connections of the dataset; the keypoint and limb order must match the dataset type the annotations come from, otherwise parsing will not be correct
- data_format : string
data format specifying the channel order; available inputs: 'channels_first' (data shape C*H*W) or 'channels_last' (data shape H*W*C)
- Returns
- list
a list of human objects; see Model.Human for detailed information on the Human object
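As a rough illustration of the first parsing stage, the sketch below thresholds a channels_first delta map to collect candidate keypoints per part; the real postprocess additionally decodes tx/ty/tw/th into boxes and links parts into humans via the te edge map. `candidate_keypoints` is a hypothetical helper, not part of the library.

```python
import numpy as np

def candidate_keypoints(delta, threshold=0.3):
    """For each part channel of a [C, H, W] confidence map (channels_first),
    return grid cells whose confidence exceeds `threshold` as
    (grid_x, grid_y, confidence) tuples. A simplified sketch only."""
    C, H, W = delta.shape
    cands = []
    for c in range(C):
        ys, xs = np.where(delta[c] > threshold)
        cands.append([(int(x), int(y), float(delta[c, y, x]))
                      for y, x in zip(ys, xs)])
    return cands
```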
hyperpose.Model.pose_proposal.utils.preprocess(annos, bbxs, model_hin, modeL_win, model_hout, model_wout, model_hnei, model_wnei, parts, limbs, data_format='channels_first')
Preprocess function of PoseProposal class models.
Takes keypoint annotations, bounding-box annotations, the model input and output height and width, the limb neighbor-area height and width, and the dataset parts and limbs, and returns the constructed targets delta, tx, ty, tw, th, te, te_mask.
- Parameters
- annos : list
a list of keypoint annotations; each annotation is a list of keypoints belonging to one person, each keypoint follows the format (x, y), with x<0 or y<0 if the keypoint is invisible or unannotated. The annotations must come from a known dataset_type, otherwise the keypoint and limb order will not be correct.
- bbxs : list
a list of bounding-box annotations, each of format [x, y, w, h]
- model_hin : Int
height of the model input
- modeL_win : Int
width of the model input
- model_hout : Int
height of the model output
- model_wout : Int
width of the model output
- model_hnei : Int
limb neighbor-area height, determining the neighbor area used to match limbs; see the Pose Proposal paper for details
- model_wnei : Int
limb neighbor-area width, determining the neighbor area used to match limbs; see the Pose Proposal paper for details
- parts, limbs : list
the keypoint parts and limb connections of the dataset; the keypoint and limb order must match the dataset type the annotations come from, otherwise the generated targets will not be correct
- data_format : string
data format specifying the channel order; available inputs: 'channels_first' (data shape C*H*W) or 'channels_last' (data shape H*W*C)
- Returns
- list
a list of 7 elements:
delta: keypoint confidence feature map, shape [C,H,W] (channels_first) or [H,W,C] (channels_last)
tx: keypoint bounding-box center x coordinates, divided by the grid size, shape [C,H,W] (channels_first) or [H,W,C] (channels_last)
ty: keypoint bounding-box center y coordinates, divided by the grid size, shape [C,H,W] (channels_first) or [H,W,C] (channels_last)
tw: keypoint bounding-box widths, divided by the image width, shape [C,H,W] (channels_first) or [H,W,C] (channels_last)
th: keypoint bounding-box heights, divided by the image height, shape [C,H,W] (channels_first) or [H,W,C] (channels_last)
te: edge confidence feature map, shape [C,H,W,Hnei,Wnei] (channels_first) or [H,W,Hnei,Wnei,C] (channels_last)
te_mask: mask of the edge confidence feature map, used for loss calculation, shape [C,H,W,Hnei,Wnei] (channels_first) or [H,W,Hnei,Wnei,C] (channels_last)
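Under the documented conventions (delta marks the responsible grid cell, and tx/ty store the keypoint center divided by the grid size), the targets for a single visible keypoint could be encoded as below. `encode_keypoint` is a hypothetical sketch, not the library's implementation.

```python
import numpy as np

def encode_keypoint(x, y, hin, win, hout, wout, part_idx, delta, tx, ty):
    """Write targets for one visible keypoint into [C, Hout, Wout] maps
    (channels_first): delta marks the responsible grid cell, tx/ty store
    the keypoint center in grid units. A hypothetical sketch only."""
    grid_w = win / wout                       # one output cell's width in pixels
    grid_h = hin / hout                       # one output cell's height in pixels
    gx, gy = int(x / grid_w), int(y / grid_h)  # the responsible grid cell
    if 0 <= gx < wout and 0 <= gy < hout:
        delta[part_idx, gy, gx] = 1.0
        tx[part_idx, gy, gx] = x / grid_w     # center x divided by grid size
        ty[part_idx, gy, gx] = y / grid_h     # center y divided by grid size
```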
hyperpose.Model.pose_proposal.utils.restore_coor(x, y, w, h, win, hin, wout, hout, data_format='channels_first')
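Assuming the same conventions, restore_coor presumably inverts the target encoding, mapping grid-space predictions back to image pixels. A hypothetical sketch of that coordinate arithmetic; `restore_box` is not the library's actual function.

```python
def restore_box(x, y, w, h, win, hin, wout, hout):
    """Map predicted grid-space coordinates back to image pixels:
    x, y are centers in grid units, w, h are fractions of the image
    width/height. A hypothetical sketch only."""
    grid_w = win / wout
    grid_h = hin / hout
    return (x * grid_w,   # center x in pixels
            y * grid_h,   # center y in pixels
            w * win,      # box width in pixels
            h * hin)      # box height in pixels
```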
hyperpose.Model.pose_proposal.utils.visualize(img, predicts, parts, limbs, save_name='bbxs', save_dir='./save_dir/vis_dir', data_format='channels_first', save_tofile=True)
Visualize function of PoseProposal class models.
Takes the model-predicted feature maps delta, tx, ty, tw, th, te, te_mask and outputs a visualized image. The image is saved at 'save_dir'/'save_name'_visualize.png.
- Parameters
- img : numpy array
the input image
- predicts : list
a list of model outputs: delta, tx, ty, tw, th, te, te_mask; see postprocess for the shape and meaning of each map
- parts, limbs : list
the keypoint parts and limb connections of the dataset the annotations come from
- save_name : String
output image name, used to distinguish saved visualizations
- save_dir : String
directory in which to save the visualized image
- data_format : string
data format specifying the channel order; available inputs: 'channels_first' (data shape C*H*W) or 'channels_last' (data shape H*W*C)
- Returns
- None