Download - ObjectNet

ObjectNet is available as a 184GB zip file containing all of the images formatted as high resolution PNGs.

We also host a backup copy, which may occasionally run out of bandwidth.

The zip file has a password to ensure that everyone is aware of our unusual license. The password is: objectnetisatestset

Get in touch if you are using it; we would love to hear about your research. Feel free to reach out if you run into any difficulties using ObjectNet.

Label format

For more details on the format of the labels file, see

How do I recognize if an image is in ObjectNet?

As training sets become huge, the risk that test and training sets overlap is serious. We provide every ObjectNet image with a 1-pixel red border, which must be removed before performing inference. The ObjectNet license requires that if you post images from ObjectNet to the web, you include this border. Any time you see an image with a solid 1-pixel red border, that is an indication it is in someone's test set and you should be careful about training on it. Reverse image search will allow you to figure out which test set it came from.
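The border check and removal described above can be sketched as follows. This is a minimal illustration, not official ObjectNet tooling: it assumes images are loaded as nested lists of (R, G, B) tuples, and it assumes the border is pure red (255, 0, 0); in practice you would verify the actual pixel values and use a library such as Pillow or NumPy.

```python
# Assumed border color -- check the actual pixel values in the dataset.
RED = (255, 0, 0)

def has_red_border(img):
    """Return True if every pixel on the outer 1-pixel edge is solid red.

    `img` is a list of rows, each row a list of (R, G, B) tuples.
    """
    h, w = len(img), len(img[0])
    top_and_bottom = img[0] + img[h - 1]
    left_and_right = [row[0] for row in img] + [row[w - 1] for row in img]
    return all(px == RED for px in top_and_bottom + left_and_right)

def strip_border(img):
    """Remove the outer 1-pixel border before running inference."""
    return [row[1:-1] for row in img[1:-1]]
```

With a real image library the same idea is a one-liner, e.g. cropping one pixel off each side of the loaded image.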


Please read this section: ObjectNet has an unusual license!

ObjectNet is free to use for both research and commercial applications. The authors own the source images and allow their use under a license derived from Creative Commons Attribution 4.0 with only two additional clauses.

  • 1. ObjectNet may never be used to tune the parameters of any model.
  • 2. Any individual images from ObjectNet may only be posted to the web if they include their 1-pixel red border.

If you are using ObjectNet, please cite our work; the citation appears at the bottom of this page. Any derivative of ObjectNet must contain attribution as well.


Andrei Barbu, David Mayo, Julian Alverio, William Luo, Christopher Wang, Dan Gutfreund, Josh Tenenbaum, and Boris Katz. Objectnet: A large-scale bias-controlled dataset for pushing the limits of object recognition models. In Advances in Neural Information Processing Systems 32, pages 9448–9458. 2019.

This work was supported, in part, by the Center for Brains, Minds and Machines (CBMM), NSF STC award CCF-1231216, the MIT-IBM Brain-Inspired Multimedia Comprehension project, the Toyota Research Institute, and the SystemsThatLearn@CSAIL initiative.