
HUI360: A 360° Egocentric Dataset and Baselines for Human-Robot Interaction Anticipation

A dataset for anticipating human-robot interaction. Under review.

Each file corresponds to a recording, and each line corresponds to one detection of one track. Each recording contains several episodes of contiguous time (no detections were found between episodes, so that data was discarded).
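The column that carries the frame/time index is not named above, so the sketch below assumes a hypothetical frame column; episodes can then be recovered by splitting wherever consecutive frames are not contiguous:

```python
import pandas as pd

def split_episodes(df, frame_col="frame", max_gap=1):
    """Assign an episode index to each row of a recording.

    Assumption: `frame_col` holds an integer frame index; the actual
    column name in the released files may differ.
    """
    frames = df[frame_col].sort_values().unique()
    # A new episode starts wherever consecutive frames differ by more than max_gap
    episode_of_frame = {}
    episode = 0
    prev = None
    for f in frames:
        if prev is not None and f - prev > max_gap:
            episode += 1
        episode_of_frame[f] = episode
        prev = f
    return df[frame_col].map(episode_of_frame)

# toy example: frames 0-2 form one episode, frames 10-11 another
df = pd.DataFrame({"frame": [0, 1, 2, 10, 11]})
print(split_episodes(df).tolist())  # [0, 0, 0, 1, 1]
```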

Tracks have an ID that is unique within a file, so you can extract all the detections of a single track using unique_track_identifier:

track_data = df[df["unique_track_identifier"] == "2022_09_21_astor_place_landfill_0000_0"]
  • For each detection, the columns xmin, xmax, ymin, ymax give the bounding box in pixel coordinates. Note that the images are equirectangular with size 3840x1920, but a bounding box may have xmin > 3840, which indicates wrap-around across the right image border: xmin = 4240 is equivalent to xmin = 400.
  • For each detection, the columns sapiens_308_[JOINTNAME]_[x,y,score] contain the pixel coordinates and confidence of keypoints detected with Sapiens, in the Goliath 308-keypoint format.
  • For each detection, the columns vitpose_[JOINTNAME]_[x,y,score] contain the pixel coordinates and confidence of keypoints detected with ViTPose, in the COCO 17-keypoint format.
  • The mask_rle column contains the run-length-encoded (RLE) binary mask of the person in the image. RLE encoding/decoding functions:
import torch

def encode_RLE(mask):
    """
    Encode a mask into a RLE.
    Args:
        mask: torch.bool [H, W]
    Returns:
        runs: torch.tensor [N+1] - a leading 0/1 flag (1 if the mask
              starts with True) followed by the N run lengths
    """
    flat = mask.flatten() # [H*W]
    
    if flat.numel() == 0:
        # Empty mask: flag 0 and no runs, so decode_RLE returns an all-False mask
        return torch.tensor([0], dtype=torch.long, device=flat.device)
    
    starts_with_true = flat[0].item()
    
    # Find transitions between True/False
    # Add dummy values at start and end to handle boundaries
    padded = torch.cat([torch.tensor([not flat[0]], device=flat.device), flat, torch.tensor([not flat[-1]], device=flat.device)])
    
    # Find where values change
    transitions = torch.nonzero(padded[1:] != padded[:-1], as_tuple=False).flatten()
    
    # Calculate run lengths
    runs = torch.diff(transitions)
    
    # prepend the flag (1 if the mask starts with True, else 0) so that
    # decode_RLE does not need starts_with_true as a separate argument
    runs = torch.cat([torch.tensor([int(starts_with_true)], device=runs.device), runs])
        
    return runs

def decode_RLE(runs, shape):
    """
    Decode a RLE into a mask.
    Args:
        runs: torch.tensor [N] - run lengths
        shape: tuple - shape to reshape result to
    Returns:
        mask: torch.bool [H, W]
    """
    
    # the first element is the flag prepended by encode_RLE
    start_with_true = bool(runs[0].item())
    runs = runs[1:]
    
    if runs.numel() == 0:
        return torch.zeros(shape, dtype=torch.bool, device=runs.device)
    
    # Create alternating pattern: start_with_true determines first value
    start_val = 1 if start_with_true else 0
    vals = (torch.arange(runs.numel(), device=runs.device) + start_val) % 2
    
    # Expand runs into full sequence
    expanded = torch.repeat_interleave(vals, runs).bool()
    
    # Reshape to target shape
    total_elements = shape[0] * shape[1] if len(shape) == 2 else shape[0]
    if expanded.numel() != total_elements:
        # Pad or truncate if needed
        if expanded.numel() < total_elements:
            padding = torch.zeros(total_elements - expanded.numel(), dtype=torch.bool, device=runs.device)
            expanded = torch.cat([expanded, padding])
        else:
            expanded = expanded[:total_elements]
    
    mask_dec = expanded.view(shape)
    return mask_dec
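The bounding-box wrap-around convention described above can be handled with a small helper (a sketch; unwrap_bbox is not part of the dataset tooling). After unwrapping, a box that crosses the right image border ends up with xmin > xmax, which signals the crossing:

```python
def unwrap_bbox(xmin, xmax, width=3840):
    """Map wrapped x coordinates back into [0, width).

    Coordinates > width wrap around the equirectangular image,
    e.g. xmin = 4240 is equivalent to xmin = 400.
    """
    return xmin % width, xmax % width

print(unwrap_bbox(4240, 4400))  # (400, 560)
print(unwrap_bbox(3700, 4000))  # (3700, 160) -> xmin > xmax: box crosses the border
```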