End-to-End Object Detection with Transformers

By Nicolas Carion, Francisco Massa, Gabriel Synnaeve et al. (Facebook AI Research), 2020

This paper presents a fully end-to-end object detection system that combines a convolutional backbone with a Transformer encoder-decoder, casting detection as direct set prediction and removing hand-designed components such as anchor generation and non-maximum suppression. The new model performs on par with Faster R-CNN and generalizes to other tasks such as panoptic segmentation.
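The key training ingredient is a bipartite matching between the model's query predictions and the ground-truth objects, so that each object is assigned exactly one prediction. Below is a toy sketch of that matching step, assuming a simplified cost of negative class probability plus a weighted L1 box distance (the paper's full cost also includes a generalized-IoU term, omitted here); `match_predictions` and `l1_weight` are illustrative names, not the paper's API.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment  # Hungarian algorithm

def match_predictions(pred_probs, pred_boxes, gt_labels, gt_boxes, l1_weight=5.0):
    """Toy bipartite matching of N query predictions to M ground-truth objects.

    pred_probs: (N, K) class probabilities per query
    pred_boxes: (N, 4) predicted boxes
    gt_labels:  (M,)   ground-truth class indices
    gt_boxes:   (M, 4) ground-truth boxes
    """
    # classification cost: negative probability of each ground-truth class
    cost_cls = -pred_probs[:, gt_labels]                               # (N, M)
    # box cost: L1 distance between every predicted and ground-truth box
    cost_box = np.abs(pred_boxes[:, None, :] - gt_boxes[None, :, :]).sum(-1)
    cost = cost_cls + l1_weight * cost_box
    rows, cols = linear_sum_assignment(cost)   # optimal one-to-one assignment
    return list(zip(rows, cols))               # (query index, gt index) pairs
```

Unmatched queries are supervised to predict a special "no object" class, which is what lets the model dispense with non-maximum suppression.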


Supervised Contrastive Learning

By Prannay Khosla, Piotr Teterwak, Chen Wang et al. (Google Research), 2020

The authors adapt the contrastive loss, which has recently proven very effective for learning deep network representations in the self-supervised setting, to fully supervised learning, and achieve better results than cross-entropy training for ResNet-50 and ResNet-200.


ResNeSt: Split-Attention Networks

By Hang Zhang (Amazon), Chongruo Wu (UC Davis), Zhongyue Zhang (Amazon) et al., 2020

The authors propose a new ResNet-like architecture that incorporates attention across groups of feature maps. In contrast to previous attention models such as SENet and SKNet, the new block applies channel-wise attention separately to each group of feature-map splits, in a computationally efficient and simply modular structure.
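The core of the block is the split-attention operation: the R "radix" splits are summed, globally pooled, passed through a small MLP, and recombined with per-channel softmax weights taken across the splits. A minimal numpy sketch under those assumptions (single cardinal group, dense weights `w1`/`w2` as stand-ins for the paper's 1x1 convolutions):

```python
import numpy as np

def split_attention(x_splits, w1, w2):
    """Sketch of split-attention over R feature-map splits.

    x_splits: (R, C, H, W) radix splits, each with C channels
    w1: (C, C_mid) and w2: (C_mid, R*C): weights of the attention MLP
    """
    R, C, H, W = x_splits.shape
    u = x_splits.sum(axis=0)                 # element-wise sum of the splits
    s = u.mean(axis=(1, 2))                  # global average pooling -> (C,)
    z = np.maximum(s @ w1, 0.0)              # FC + ReLU -> (C_mid,)
    logits = (z @ w2).reshape(R, C)          # per-split, per-channel logits
    # r-softmax: normalize across the radix dimension for each channel
    a = np.exp(logits - logits.max(axis=0))
    a = a / a.sum(axis=0, keepdims=True)     # (R, C), columns sum to 1
    # attention-weighted sum of the splits -> (C, H, W)
    return (a[:, :, None, None] * x_splits).sum(axis=0)
```

With radix R = 1 the softmax degenerates to all-ones weights and the block reduces to a plain squeeze-and-excitation-style gating of a single branch, which is what makes it a drop-in generalization of earlier attention blocks.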


Compounding the Performance Improvements of Assembled Techniques in a Convolutional Neural Network

By Jungkyu Lee, Taeryun Won, and Kiho Hong (Clova Vision, NAVER Corp), 2019

A great review of many state-of-the-art tricks for improving the performance of a deep convolutional network (ResNet), complete with implementation details, source code, and performance results. A must-read for Kaggle competitors or anyone chasing maximum performance on computer vision tasks.

