This work presents initial steps toward an autonomously learned communication language among distributed agents. Symbols are anchored directly in the environment: each agent assigns a symbol to every distinct specimen it can sense. Starting with no prior knowledge of the environment, agents perform a classification and consensus routine to reach approximate agreement on specimen-symbol relationships. Convergence of the population to a shared lexicon is examined and compared against a traditional control baseline in which the consensus method acts analogously to a feedback controller.
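As a rough illustration of the kind of process described above, the following is a minimal sketch of symbol consensus among agents, not the paper's actual routine: each agent invents an arbitrary symbol for every specimen it senses, then repeatedly compares its symbol with a randomly chosen peer and adopts the peer's symbol with some probability, nudging the population toward a shared lexicon. The specimen names, agent count, and adoption probability are all illustrative assumptions.

```python
import random

random.seed(0)

SPECIMENS = ["rock", "tree", "water"]  # hypothetical distinct specimens
N_AGENTS = 10

def invent_symbol():
    # Agents start with no prior knowledge: each invents an arbitrary
    # symbol (here, a random 3-letter string) per specimen it senses.
    return "".join(random.choice("abcdefghij") for _ in range(3))

agents = [{s: invent_symbol() for s in SPECIMENS} for _ in range(N_AGENTS)]

def consensus_round(agents):
    """One round: each agent compares one specimen's symbol with a random
    peer and adopts the peer's symbol with probability 0.5 -- a simple
    update that pulls each agent toward the population's current usage."""
    for agent in agents:
        peer = random.choice(agents)
        specimen = random.choice(SPECIMENS)
        if agent[specimen] != peer[specimen] and random.random() < 0.5:
            agent[specimen] = peer[specimen]

def converged(agents):
    # Full consensus: every agent uses the same symbol for each specimen.
    return all(len({a[s] for a in agents}) == 1 for s in SPECIMENS)

rounds = 0
while not converged(agents) and rounds < 10000:
    consensus_round(agents)
    rounds += 1

print("converged:", converged(agents), "after", rounds, "rounds")
```

Even this crude pairwise-adoption rule drives a small population to a single lexicon; the paper's feedback-controller analogy can be read as replacing the fixed adoption probability with a correction term proportional to the disagreement observed.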