This paper presents a method for object detection in autonomous vehicles that fuses LiDAR and camera image data. The method generates 2D object proposals from the LiDAR point cloud by filtering and grouping edge points; these proposals are then classified with an R-FCN neural network. The resulting class labels are mapped back to the 3D LiDAR points to recover the full 3D bounding box position and orientation of each detected object. Because the point cloud provides spatial information directly, the method also handles occluded regions better than approaches that use image data alone. Evaluated on the KITTI dataset, the method achieves detection accuracy and speed suitable for autonomous driving applications.
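The first stage of the pipeline (projecting LiDAR points into the image plane and grouping them into coarse 2D proposal boxes) might be sketched as follows. This is a minimal illustration, not the paper's implementation: the grid-based grouping below is a simplified stand-in for the edge-point filtering and grouping described above, and all function names and parameters are hypothetical.

```python
import numpy as np

def project_to_image(points, P):
    """Project 3D LiDAR points (N, 3) to 2D pixel coordinates
    using a 3x4 camera projection matrix P (pinhole model)."""
    homo = np.hstack([points, np.ones((points.shape[0], 1))])
    uvw = homo @ P.T
    return uvw[:, :2] / uvw[:, 2:3]  # divide by depth

def group_proposals(pixels, cell=32):
    """Group projected points into coarse 2D proposal boxes by
    binning them into a pixel grid; each occupied grid cell
    accumulates a bounding box over the points that fall in it.
    (A simple stand-in for edge-point grouping.)"""
    cells = np.floor(pixels / cell).astype(int)
    boxes = {}
    for key, (u, v) in zip(map(tuple, cells), pixels):
        x0, y0, x1, y1 = boxes.get(key, (u, v, u, v))
        boxes[key] = (min(x0, u), min(y0, v), max(x1, u), max(y1, v))
    return list(boxes.values())  # [(x0, y0, x1, y1), ...]
```

Each resulting box would then be cropped from the camera image and passed to the R-FCN classifier; the labels of the points inside a classified box determine which 3D points contribute to the final oriented 3D bounding box.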