{"id":1009,"date":"2020-07-02T17:31:00","date_gmt":"2020-07-02T08:31:00","guid":{"rendered":"https:\/\/arithmer.blog\/?p=1009"},"modified":"2022-03-08T15:43:39","modified_gmt":"2022-03-08T06:43:39","slug":"survey-on-self-supervised-image-segmentation","status":"publish","type":"post","link":"https:\/\/arithmer.blog\/blog\/survey-on-self-supervised-image-segmentation","title":{"rendered":"\u81ea\u5df1\u6559\u5e2b\u3042\u308a\u5b66\u7fd2\u306e\u753b\u50cf\u30bb\u30b0\u30e1\u30f3\u30c6\u30fc\u30b7\u30e7\u30f3\u3078\u306e\u9069\u7528"},"content":{"rendered":"\n<p class=\"has-small-font-size\">\u672c\u8cc7\u6599\u306f2020\u5e747\u67082\u65e5\u306b\u793e\u5185\u5171\u6709\u8cc7\u6599\u3068\u3057\u3066\u5c55\u958b\u3057\u3066\u3044\u305f\u3082\u306e\u3092WEB\u30da\u30fc\u30b8\u5411\u3051\u306b\u30ea\u30cb\u30e5\u30fc\u30a2\u30eb\u3057\u305f\u5185\u5bb9\u306b\u306a\u308a\u307e\u3059\u3002<\/p>\n\n\n\n<h3 class=\"has-medium-font-size wp-block-heading\" id=\"agenda\"><strong>\u25a0Agenda<\/strong><\/h3>\n\n\n\n<p id=\"introduction\" style=\"font-size:18px\"><strong>Introduction<\/strong><\/p>\n\n\n\n<ul style=\"font-size:16px\"><li>Image Segmentation<\/li><li>Problems<\/li><li>Self-supervised learning<\/li><li>Content of this presentation<\/li><\/ul>\n\n\n\n<p style=\"font-size:18px\"><strong>Survey<\/strong><\/p>\n\n\n\n<ul style=\"font-size:16px\"><li>Pseudo Labeling<\/li><li>Class Activation Map<\/li><li>Image Depth Information<\/li><li>How about using videos<\/li><\/ul>\n\n\n\n<p style=\"font-size:18px\"><strong>Some comments<\/strong><\/p>\n\n\n\n<h3 class=\"has-medium-font-size wp-block-heading\" id=\"image-segmentation\"><strong>\u25a0Image Segmentation<\/strong><\/h3>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"1024\" height=\"375\" src=\"https:\/\/arithmer.blog\/wp-content\/uploads\/2022\/02\/NS20200702_01.jpg\" alt=\"\" class=\"wp-image-1060\" srcset=\"https:\/\/arithmer.blog\/wp-content\/uploads\/2022\/02\/NS20200702_01.jpg 
1024w, https:\/\/arithmer.blog\/wp-content\/uploads\/2022\/02\/NS20200702_01-300x110.jpg 300w, https:\/\/arithmer.blog\/wp-content\/uploads\/2022\/02\/NS20200702_01-768x281.jpg 768w, https:\/\/arithmer.blog\/wp-content\/uploads\/2022\/02\/NS20200702_01-304x111.jpg 304w\" sizes=\"(max-width: 1024px) 100vw, 1024px\"><\/figure>\n\n\n\n<h3 class=\"has-medium-font-size wp-block-heading\" id=\"problems\"><strong>\u25a0Problems<\/strong><\/h3>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"1024\" height=\"373\" src=\"https:\/\/arithmer.blog\/wp-content\/uploads\/2022\/02\/NS20200702_02.jpg\" alt=\"\" class=\"wp-image-1061\" srcset=\"https:\/\/arithmer.blog\/wp-content\/uploads\/2022\/02\/NS20200702_02.jpg 1024w, https:\/\/arithmer.blog\/wp-content\/uploads\/2022\/02\/NS20200702_02-300x109.jpg 300w, https:\/\/arithmer.blog\/wp-content\/uploads\/2022\/02\/NS20200702_02-768x280.jpg 768w, https:\/\/arithmer.blog\/wp-content\/uploads\/2022\/02\/NS20200702_02-304x111.jpg 304w\" sizes=\"(max-width: 1024px) 100vw, 1024px\"><\/figure>\n\n\n\n<ul style=\"font-size:16px\"><li><strong>very hard to annotate (=expensive)<\/strong><ul><li>pixel-wise annotation<\/li><li>different types of objects<\/li><li>boundaries are often blurry<\/li><\/ul><\/li><li><strong>easy to make mistakes<\/strong><ul><li>many unclear cases<\/li><li>need to stay focused for a long time<\/li><\/ul><\/li><\/ul>\n\n\n\n<p style=\"font-size:16px\">How can we change this situation?<\/p>\n\n\n\n<h3 class=\"has-medium-font-size wp-block-heading\" id=\"self-supervised-learning\"><strong>\u25a0Self-supervised learning<\/strong><\/h3>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"1024\" height=\"377\" src=\"https:\/\/arithmer.blog\/wp-content\/uploads\/2022\/02\/NS20200702_03.jpg\" alt=\"\" class=\"wp-image-1062\" srcset=\"https:\/\/arithmer.blog\/wp-content\/uploads\/2022\/02\/NS20200702_03.jpg 1024w, 
https:\/\/arithmer.blog\/wp-content\/uploads\/2022\/02\/NS20200702_03-300x110.jpg 300w, https:\/\/arithmer.blog\/wp-content\/uploads\/2022\/02\/NS20200702_03-768x283.jpg 768w, https:\/\/arithmer.blog\/wp-content\/uploads\/2022\/02\/NS20200702_03-304x112.jpg 304w\" sizes=\"(max-width: 1024px) 100vw, 1024px\"><\/figure>\n\n\n\n<ul style=\"font-size:16px\"><li>Research topic pushed by the \u201cgodfathers of AI\u201d, especially Yann LeCun<\/li><li>Pre-train a feature extractor in an unsupervised way, then train the classifier with annotated data<\/li><li>Recent papers reach SOTA with only 10% of the labels (and are starting to go beyond)<\/li><li>Methods coming from NLP<\/li><\/ul>\n\n\n\n<h3 class=\"has-medium-font-size wp-block-heading\" id=\"content-of-this-presentation\"><strong>\u25a0Content of this presentation<\/strong><\/h3>\n\n\n\n<ul style=\"font-size:16px\"><li><strong>4 very different directions<\/strong><ul><li>Using pseudo labeling<\/li><li>Using Activation Maps (like grad-CAM)<\/li><li>Using depth information<\/li><li>Leveraging frames in videos<\/li><\/ul><\/li><\/ul>\n\n\n\n<p style=\"font-size:16px\">Spoiler: there is no truly convincing self-supervised semantic segmentation method yet\u2026<\/p>\n\n\n\n<ul style=\"font-size:16px\"><li><strong>High-level explanations<\/strong><\/li><li><strong>For more details, I provided notes for some papers<\/strong><\/li><li><strong>For even more details, please read the original papers<\/strong><\/li><\/ul>\n\n\n\n<h3 class=\"has-medium-font-size wp-block-heading\" id=\"pseudo-labeling\"><strong>\u25a0Pseudo Labeling<\/strong><\/h3>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"1024\" height=\"325\" src=\"https:\/\/arithmer.blog\/wp-content\/uploads\/2022\/02\/NS20200702_04.jpg\" alt=\"\" class=\"wp-image-1063\" srcset=\"https:\/\/arithmer.blog\/wp-content\/uploads\/2022\/02\/NS20200702_04.jpg 1024w, 
https:\/\/arithmer.blog\/wp-content\/uploads\/2022\/02\/NS20200702_04-300x95.jpg 300w, https:\/\/arithmer.blog\/wp-content\/uploads\/2022\/02\/NS20200702_04-768x244.jpg 768w, https:\/\/arithmer.blog\/wp-content\/uploads\/2022\/02\/NS20200702_04-304x96.jpg 304w\" sizes=\"(max-width: 1024px) 100vw, 1024px\"><\/figure>\n\n\n\n<ul style=\"font-size:16px\"><li>Usually use softmax scores to differentiate \u201cstrong predictions\u201d from \u201cweak predictions\u201d<\/li><li>Keep only \u201cstrong predictions\u201d as pseudo-labels<\/li><li>A popular research topic<\/li><\/ul>\n\n\n\n<p style=\"font-size:18px\"><strong>Issue<\/strong><\/p>\n\n\n\n<ul style=\"font-size:16px\"><li>Softmax is not the best criterion, since it takes only the best score without considering the other scores <br>(which might also be promising)<\/li><\/ul>\n\n\n\n<h3 class=\"has-medium-font-size wp-block-heading\" id=\"pseudo-labeling-entropy-guided\"><strong>\u25a0Pseudo Labeling: Entropy-guided<\/strong><\/h3>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"1024\" height=\"318\" src=\"https:\/\/arithmer.blog\/wp-content\/uploads\/2022\/02\/NS20200702_05.jpg\" alt=\"\" class=\"wp-image-1064\" srcset=\"https:\/\/arithmer.blog\/wp-content\/uploads\/2022\/02\/NS20200702_05.jpg 1024w, https:\/\/arithmer.blog\/wp-content\/uploads\/2022\/02\/NS20200702_05-300x93.jpg 300w, https:\/\/arithmer.blog\/wp-content\/uploads\/2022\/02\/NS20200702_05-768x239.jpg 768w, https:\/\/arithmer.blog\/wp-content\/uploads\/2022\/02\/NS20200702_05-304x94.jpg 304w\" sizes=\"(max-width: 1024px) 100vw, 1024px\"><\/figure>\n\n\n\n<p class=\"has-text-align-center has-small-font-size\">My notes <br>https:\/\/arithmer.co.jp\/wp-content\/uploads\/pdf\/notes_ESL_Entropy-guided_Self-supervised_Learning_for_Domain_Adaptation_in_Semantic_Segmentation.pdf <br>Original paper <br>https:\/\/arxiv.org\/abs\/2006.08658v1 <br>Repo <br>https:\/\/github.com\/liyunsheng13\/BDL<\/p>\n\n\n\n<div style=\"height:50px\" aria-hidden=\"true\" 
class=\"wp-block-spacer\"><\/div>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"1024\" height=\"375\" src=\"https:\/\/arithmer.blog\/wp-content\/uploads\/2022\/02\/NS20200702_06.jpg\" alt=\"\" class=\"wp-image-1065\" srcset=\"https:\/\/arithmer.blog\/wp-content\/uploads\/2022\/02\/NS20200702_06.jpg 1024w, https:\/\/arithmer.blog\/wp-content\/uploads\/2022\/02\/NS20200702_06-300x110.jpg 300w, https:\/\/arithmer.blog\/wp-content\/uploads\/2022\/02\/NS20200702_06-768x281.jpg 768w, https:\/\/arithmer.blog\/wp-content\/uploads\/2022\/02\/NS20200702_06-304x111.jpg 304w\" sizes=\"(max-width: 1024px) 100vw, 1024px\"><\/figure>\n\n\n\n<p style=\"font-size:16px\">Model Pre-trained on GTA5 dataset and method used on Cityscapes: <br>consistently improves the final accuracy but by less than 1% mIoU State-of-the-art for fully supervised learning: 85.1% (IT = image translation)<\/p>\n\n\n\n<h3 class=\"has-medium-font-size wp-block-heading\" id=\"class-activation-map\"><strong>\u25a0Class Activation Map<\/strong><\/h3>\n\n\n\n<ul style=\"font-size:16px\"><li>Uses gradient information flow to the neurons of a specific CNN layer to identify regions of activation <\/li><li>Step forward interpretability <\/li><li>Work from 2017 (last version from last December)<\/li><\/ul>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"1024\" height=\"266\" src=\"https:\/\/arithmer.blog\/wp-content\/uploads\/2022\/02\/NS20200702_07.jpg\" alt=\"\" class=\"wp-image-1066\" srcset=\"https:\/\/arithmer.blog\/wp-content\/uploads\/2022\/02\/NS20200702_07.jpg 1024w, https:\/\/arithmer.blog\/wp-content\/uploads\/2022\/02\/NS20200702_07-300x78.jpg 300w, https:\/\/arithmer.blog\/wp-content\/uploads\/2022\/02\/NS20200702_07-768x200.jpg 768w, https:\/\/arithmer.blog\/wp-content\/uploads\/2022\/02\/NS20200702_07-304x79.jpg 304w\" sizes=\"(max-width: 1024px) 100vw, 1024px\"><\/figure>\n\n\n\n<p class=\"has-text-align-center 
has-small-font-size\">original paper: https:\/\/arxiv.org\/abs\/1610.02391<\/p>\n\n\n\n<h3 class=\"has-medium-font-size wp-block-heading\" id=\"class-activation-map-equivariant-attention-mechanism\"><strong>\u25a0Class Activation Map: Equivariant Attention Mechanism<\/strong><\/h3>\n\n\n\n<ol><li>Need image level annotation (class) <\/li><li>3 loss functions combines (ECR, ER and cls) <ul><li><strong>a.<\/strong> both CAM outputs should be similar despite Affine transform <\/li><li><strong>b. <\/strong>But CAM degenerates (converge to a trivial solution) so ECR regularizes the PCM outputs with the original CAM<\/li><\/ul><\/li><\/ol>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"1024\" height=\"279\" src=\"https:\/\/arithmer.blog\/wp-content\/uploads\/2022\/02\/NS20200702_08.jpg\" alt=\"\" class=\"wp-image-1067\" srcset=\"https:\/\/arithmer.blog\/wp-content\/uploads\/2022\/02\/NS20200702_08.jpg 1024w, https:\/\/arithmer.blog\/wp-content\/uploads\/2022\/02\/NS20200702_08-300x82.jpg 300w, https:\/\/arithmer.blog\/wp-content\/uploads\/2022\/02\/NS20200702_08-768x209.jpg 768w, https:\/\/arithmer.blog\/wp-content\/uploads\/2022\/02\/NS20200702_08-304x83.jpg 304w\" sizes=\"(max-width: 1024px) 100vw, 1024px\"><\/figure>\n\n\n\n<p class=\"has-text-align-center has-small-font-size\">My notes<br>https:\/\/arithmer.co.jp\/wp-content\/uploads\/pdf\/notes_Self-supervised_Equivariant_Attention_Mechanism_for_Weakly_Supervised_Semantic_Segmentation.pdf<br>Original paper <br><a rel=\"noreferrer noopener\" href=\"https:\/\/arxiv.org\/abs\/2004.04581\" target=\"_blank\">https:\/\/arxiv.org\/abs\/2004.04581<\/a><br>Repo <br>https:\/\/github.com\/YudeWang\/SEAM<\/p>\n\n\n\n<div style=\"height:50px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<p style=\"font-size:18px\"><strong>Main contribution: PCM module<\/strong><\/p>\n\n\n\n<ul style=\"font-size:16px\"><li>uses attention to refine the mask from grad-CAM<\/li><li>attention can 
capture contextual information<\/li><\/ul>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"1024\" height=\"194\" src=\"https:\/\/arithmer.blog\/wp-content\/uploads\/2022\/02\/NS20200702_09.jpg\" alt=\"\" class=\"wp-image-1068\" srcset=\"https:\/\/arithmer.blog\/wp-content\/uploads\/2022\/02\/NS20200702_09.jpg 1024w, https:\/\/arithmer.blog\/wp-content\/uploads\/2022\/02\/NS20200702_09-300x57.jpg 300w, https:\/\/arithmer.blog\/wp-content\/uploads\/2022\/02\/NS20200702_09-768x146.jpg 768w, https:\/\/arithmer.blog\/wp-content\/uploads\/2022\/02\/NS20200702_09-304x58.jpg 304w\" sizes=\"(max-width: 1024px) 100vw, 1024px\"><\/figure>\n\n\n\n<h3 class=\"has-medium-font-size wp-block-heading\" id=\"class-activation-map-experiments\"><strong>\u25a0Class Activation Map: Experiments<\/strong><\/h3>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"1024\" height=\"416\" src=\"https:\/\/arithmer.blog\/wp-content\/uploads\/2022\/02\/NS20200702_10.jpg\" alt=\"\" class=\"wp-image-1069\" srcset=\"https:\/\/arithmer.blog\/wp-content\/uploads\/2022\/02\/NS20200702_10.jpg 1024w, https:\/\/arithmer.blog\/wp-content\/uploads\/2022\/02\/NS20200702_10-300x122.jpg 300w, https:\/\/arithmer.blog\/wp-content\/uploads\/2022\/02\/NS20200702_10-768x312.jpg 768w, https:\/\/arithmer.blog\/wp-content\/uploads\/2022\/02\/NS20200702_10-304x124.jpg 304w\" sizes=\"(max-width: 1024px) 100vw, 1024px\"><\/figure>\n\n\n\n<h3 class=\"has-medium-font-size wp-block-heading\" id=\"image-depth-information-hn-labels\"><strong>\u25a0Image Depth Information: HN labels<\/strong><\/h3>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"1024\" height=\"334\" src=\"https:\/\/arithmer.blog\/wp-content\/uploads\/2022\/02\/NS20200702_11.jpg\" alt=\"\" class=\"wp-image-1055\" srcset=\"https:\/\/arithmer.blog\/wp-content\/uploads\/2022\/02\/NS20200702_11.jpg 1024w, 
https:\/\/arithmer.blog\/wp-content\/uploads\/2022\/02\/NS20200702_11-300x98.jpg 300w, https:\/\/arithmer.blog\/wp-content\/uploads\/2022\/02\/NS20200702_11-768x251.jpg 768w, https:\/\/arithmer.blog\/wp-content\/uploads\/2022\/02\/NS20200702_11-304x99.jpg 304w\" sizes=\"(max-width: 1024px) 100vw, 1024px\"><\/figure>\n\n\n\n<ul style=\"font-size:16px\"><li>HN label generation<ul><li>Computing angles and heights relative to the floor plane using RGB-D<\/li><li>Everything is binned to create labels<\/li><\/ul><\/li><li>Training a segmentation model on HN labels<\/li><li>Fine-tuning on the real dataset<\/li><\/ul>\n\n\n\n<p style=\"font-size:16px\">The main point is to show: pretrained on HN labels &gt;&gt; pretrained on ImageNet<\/p>\n\n\n\n<h3 class=\"has-medium-font-size wp-block-heading\" id=\"image-depth-information-experiments\"><strong>\u25a0Image Depth Information: Experiments<\/strong><\/h3>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"1024\" height=\"217\" src=\"https:\/\/arithmer.blog\/wp-content\/uploads\/2022\/02\/NS20200702_12.jpg\" alt=\"\" class=\"wp-image-1056\" srcset=\"https:\/\/arithmer.blog\/wp-content\/uploads\/2022\/02\/NS20200702_12.jpg 1024w, https:\/\/arithmer.blog\/wp-content\/uploads\/2022\/02\/NS20200702_12-300x64.jpg 300w, https:\/\/arithmer.blog\/wp-content\/uploads\/2022\/02\/NS20200702_12-768x163.jpg 768w, https:\/\/arithmer.blog\/wp-content\/uploads\/2022\/02\/NS20200702_12-304x64.jpg 304w\" sizes=\"(max-width: 1024px) 100vw, 1024px\"><\/figure>\n\n\n\n<p>Dataset: NYUv2, trained for only 50 epochs. HN labels are from NYUv2. 
ImageNet is 25x bigger<\/p>\n\n\n\n<h3 class=\"has-medium-font-size wp-block-heading\" id=\"how-about-using-videos-self-supervised-video-object-segmentation\"><strong>\u25a0How about using videos: Self-supervised Video Object Segmentation<\/strong><\/h3>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"1024\" height=\"375\" src=\"https:\/\/arithmer.blog\/wp-content\/uploads\/2022\/02\/NS20200702_13.jpg\" alt=\"\" class=\"wp-image-1057\" srcset=\"https:\/\/arithmer.blog\/wp-content\/uploads\/2022\/02\/NS20200702_13.jpg 1024w, https:\/\/arithmer.blog\/wp-content\/uploads\/2022\/02\/NS20200702_13-300x110.jpg 300w, https:\/\/arithmer.blog\/wp-content\/uploads\/2022\/02\/NS20200702_13-768x281.jpg 768w, https:\/\/arithmer.blog\/wp-content\/uploads\/2022\/02\/NS20200702_13-304x111.jpg 304w\" sizes=\"(max-width: 1024px) 100vw, 1024px\"><\/figure>\n\n\n\n<ul style=\"font-size:16px\"><li>Learns in an unsupervised way to track similar pixels across frames<\/li><li>Then, given an initial mask (frame 0 in this illustration), the model can infer the same object in the following frames<\/li><\/ul>\n\n\n\n<p class=\"has-text-align-center has-small-font-size\">Original paper: https:\/\/arxiv.org\/abs\/2006.12480v1<\/p>\n\n\n\n<div style=\"height:50px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" width=\"1024\" height=\"376\" src=\"https:\/\/arithmer.blog\/wp-content\/uploads\/2022\/02\/NS20200702_14.jpg\" alt=\"\" class=\"wp-image-1058\" srcset=\"https:\/\/arithmer.blog\/wp-content\/uploads\/2022\/02\/NS20200702_14.jpg 1024w, https:\/\/arithmer.blog\/wp-content\/uploads\/2022\/02\/NS20200702_14-300x110.jpg 300w, https:\/\/arithmer.blog\/wp-content\/uploads\/2022\/02\/NS20200702_14-768x282.jpg 768w, https:\/\/arithmer.blog\/wp-content\/uploads\/2022\/02\/NS20200702_14-304x112.jpg 304w\" sizes=\"(max-width: 1024px) 100vw, 1024px\"><\/figure>\n\n\n\n<ul 
style=\"font-size:16px\"><li>Learn unsupervised to track similar pixels across frames<\/li><li>Then when giving an initial mask (on this illustration, in the frame 0), the model can infer the same object for the next frames<\/li><\/ul>\n\n\n\n<p class=\"has-text-align-center has-small-font-size\">Original paper: https:\/\/arxiv.org\/abs\/2006.12480v1<\/p>\n\n\n\n<h3 class=\"has-medium-font-size wp-block-heading\" id=\"\u30c0\u30a6\u30f3\u30ed\u30fc\u30c9\"><strong>\u25a0\u30c0\u30a6\u30f3\u30ed\u30fc\u30c9<\/strong><\/h3>\n\n\n\n<p style=\"font-size:16px\"><a href=\"https:\/\/arithmer.blog\/wp-content\/uploads\/2022\/02\/06_\u81ea\u5df1\u6559\u5e2b\u3042\u308a\u5b66\u7fd2\u306e\u753b\u50cf\u30bb\u30b0\u30e1\u30f3\u30c6\u30fc\u30b7\u30e7\u30f3\u3078\u306e\u9069\u7528.pdf\" target=\"_blank\" rel=\"noreferrer noopener\">\u81ea\u5df1\u6559\u5e2b\u3042\u308a\u5b66\u7fd2\u306e\u753b\u50cf\u30bb\u30b0\u30e1\u30f3\u30c6\u30fc\u30b7\u30e7\u30f3\u3078\u306e\u9069\u7528.pdf<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>\u672c\u8cc7\u6599\u306f2020\u5e747\u67082\u65e5\u306b\u793e\u5185\u5171\u6709\u8cc7\u6599\u3068\u3057\u3066\u5c55\u958b\u3057\u3066\u3044\u305f\u3082\u306e\u3092WEB\u30da\u30fc\u30b8\u5411\u3051\u306b\u30ea\u30cb\u30e5\u30fc\u30a2\u30eb\u3057\u305f\u5185\u5bb9\u306b\u306a\u308a\u307e\u3059\u3002 \u25a0Agenda Introduction Image Segmentation Problems Sel &#8230; 
<\/p>\n","protected":false},"author":3,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[13],"tags":[20,80,35,24,45,89,36],"_links":{"self":[{"href":"https:\/\/arithmer.blog\/index.php?rest_route=\/wp\/v2\/posts\/1009"}],"collection":[{"href":"https:\/\/arithmer.blog\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/arithmer.blog\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/arithmer.blog\/index.php?rest_route=\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/arithmer.blog\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=1009"}],"version-history":[{"count":6,"href":"https:\/\/arithmer.blog\/index.php?rest_route=\/wp\/v2\/posts\/1009\/revisions"}],"predecessor-version":[{"id":1071,"href":"https:\/\/arithmer.blog\/index.php?rest_route=\/wp\/v2\/posts\/1009\/revisions\/1071"}],"wp:attachment":[{"href":"https:\/\/arithmer.blog\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=1009"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/arithmer.blog\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=1009"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/arithmer.blog\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=1009"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}