Presented is an implementation of a time-synchronous middleware for Python ACT-R and the open-source robotics simulator MORSE (Echeverria et al., 2012; Echeverria, Lassabe, Degroote, & Lemaignan, 2011), together with a novel vision system and a novel motor system, which I collectively call ACT-R 3D. A new 3D camera and a crude body-model robot were added to the MORSE system to facilitate modeling of affordance-based research on aperture passage (walking through apertures and rotating the shoulders as needed). Presented as a proof of concept are three affordance models built in ACT-R. The models test a novel theory, the Theory of Geometric Affordances, which proposes that humans make geometric comparisons between apertures (depth, width, height) and stored representations of body postures (the body schema). Each model is individually and qualitatively compared against human performance for overall shoulder rotation while walking through apertures of various widths (Warren & Whang, 1987; Higuchi, Seya, & Imanaka, 2012) and for overall safety margin while passing through apertures (Higuchi et al., 2012). The second model (Model 2) performs best, exhibiting rotation similar to human performance across both experiments. Model 2 supports the conclusion that an abstract geometric comparison mechanism is sufficient to support aperture-passage judgments without the use of semantically laden labels. This is the first known affordance model, implemented in a computational cognitive architecture, to match preliminary human performance data.