Krig Research


Visual Computing - AI - Visual Intelligence - Visual Solutions

VisualGenomesTM Background

Like the Human Genome Project, which catalogs all of the unique DNA in an organism from strands of DNA, VisualGenomes are learned and represented as multimodal and multivariate strands of VDNA, or visual features, including shape, texture, color, and glyphs, similar to CNN- and Transformer-learned features.

Create strands of VisualDNATM to describe any object, collecting and cataloging all the unique features that compose the object and enabling VolumeLearningTM.
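
As a purely illustrative sketch (not the proprietary VisualGenomes format), a strand of VisualDNA could be held as a small multimodal record, with one field per feature type named above; the VDNAStrand and VisualGenome names and the field layout below are assumptions for illustration only:

# Illustrative sketch only: a hypothetical in-memory layout for VDNA strands
# and a VisualGenome catalog, based on the feature types named above
# (shape, texture, color, glyphs). Not the proprietary VisualGenomes format.
from dataclasses import dataclass, field
from typing import List
import numpy as np

@dataclass
class VDNAStrand:
    """One multimodal, multivariate feature strand for a region of an object."""
    shape: np.ndarray    # e.g. contour / structure descriptor
    texture: np.ndarray  # e.g. local texture statistics
    color: np.ndarray    # e.g. color histogram or moments
    glyph: np.ndarray    # e.g. small learned pattern patch

@dataclass
class VisualGenome:
    """Catalog of all unique VDNA strands that compose one visual object."""
    label: str
    strands: List[VDNAStrand] = field(default_factory=list)

    def add(self, strand: VDNAStrand) -> None:
        self.strands.append(strand)

# Usage: build a tiny genome for one object from a single example strand.
genome = VisualGenome(label="example-object")
genome.add(VDNAStrand(shape=np.zeros(32), texture=np.zeros(16),
                      color=np.zeros(8), glyph=np.zeros(64)))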


Modeled on the human vision system, Visual Genomes and Visual DNA are powerful and flexible, several orders of magnitude faster than current AI methods such as Transformers and CNNs. Visual Genomes and VDNA are learned using proprietary Continuous Learning methods, via continual sequencing against new visual data as it is supplied. Once the VDNA and Visual Genomes are sequenced (genome sequencing is analogous to DNA sequencing), the Agents take over and perform the actual learning and inference very quickly. The Agents also operate continuously, learning and refining the VDNA and Visual Genomes catalog.


Visual Genomes are multimodal features composed of VDNA, able to learn anything and find everything.

In a single visual object, multiple VDNA are integrated together as a multimodal system with thousands of modes in each VDNA feature, compared to the monovariate single-feature modes used in CNNs and Transformers. Finally, cooperative Learning Agents work together to find, track, and continually learn what you are looking for, for maximum visual intelligence.


Visual Genomes and Visual DNA are like a set of visual patterns forming a unique mathematical signature; collectively, they form a model of a visual object.


Visual Genomes and VDNA can be represented and located by Agents using a metric combination classifier (MCC), a next-generation style of classification ideal for use in Agents. The MCC classifiers can be learned and tuned separately from the VisualGenomes model.
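
The MCC internals are not described here; as a hedged sketch of the general idea of a metric combination classifier, the example below combines several per-mode distance metrics with weights that can be tuned independently of the stored genome catalog. The metric choices, weights, and class names are illustrative assumptions, not the actual MCC:

# Illustrative sketch of a "metric combination classifier" (MCC) idea:
# several per-mode distance metrics are combined with tunable weights,
# and the weights can be tuned separately from the stored genome catalog.
# The specific metrics and weighting are assumptions, not the actual MCC.
import numpy as np

def cosine_dist(a, b):
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

def euclidean_dist(a, b):
    return float(np.linalg.norm(a - b))

class MetricCombinationClassifier:
    def __init__(self, weights=None):
        # One weight per feature mode; tunable independently of the catalog.
        self.weights = weights or {"shape": 1.0, "texture": 1.0,
                                   "color": 1.0, "glyph": 1.0}
        self.metrics = {"shape": euclidean_dist, "texture": cosine_dist,
                        "color": euclidean_dist, "glyph": cosine_dist}

    def distance(self, query, candidate):
        # Weighted combination of per-mode metric distances.
        return sum(self.weights[m] * self.metrics[m](query[m], candidate[m])
                   for m in self.weights)

    def classify(self, query, catalog):
        # Return the label of the nearest genome entry in the catalog.
        return min(catalog, key=lambda label: self.distance(query, catalog[label]))

# Usage: query and catalog entries are dicts of per-mode feature vectors.
rng = np.random.default_rng(0)
entry = {m: rng.random(8) for m in ("shape", "texture", "color", "glyph")}
catalog = {"object-A": entry,
           "object-B": {m: rng.random(8) for m in entry}}
query = {m: entry[m] + 0.01 for m in entry}
print(MetricCombinationClassifier().classify(query, catalog))  # "object-A"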


VisualGenomes are available by special work contract via the Custom Shop.




Figure 5: The Andromeda galaxy, containing ~1 trillion stars (NASA public domain image).




Inquiries
Sponsors, research partners, and advisors are encouraged to apply. Contact Scott Krig for details: krigresearchinfo@protonmail.com




Join The Visual Genomes Foundation


The Visual Genomes Foundation is a public initiative to map all visual DNA (VDNA) and the corresponding visual genomes into a public repository, for free, unrestricted public and commercial use, to enable increased research and the development of new commercial products.


The goal of the foundation is to continually map video and other images and generate the VDNA and corresponding genomes, beginning with a very large image corpus, for example a few trillion images to start.


Sponsors, research partners, and advisors are encouraged to apply.



Corporate Sponsors Welcome!





VisualGenomes: one of the largest public models in the known universe.


The amount of visual information learned and stored by the Visual Genomes Foundation (VGF) will be unprecedented. With only several hundred billion stars in our Milky Way galaxy, and perhaps a few trillion stars in the largest galaxies, the Visual Genome Model will take a place among the largest models in the universe, with an initial target of 2 trillion VDNA and genomes in the model within 4 years.


The entire model will be public domain - anyone can participate without restriction.


A common visual intelligence infrastructure enables widespread profit opportunities for all.


Corporate Sponsors will have advantages and opportunities to extend the model in any direction for their private use and profit without restrictions or royalties.


VisualGenomesTM


LVFM - Large Visual Feature Model

Multiple Intelligence Models



VisualGenomesTM is the first LVFM - Large Visual Feature Model - the first of its kind.


The goal is to catalog all visual features (i.e. VDNA or Visual DNA) as a public LVFM, and also create private LVFMs. VDNA Agents are a part of the VisualGenomesTM system.


Modeled on the human vision system, VisualGenomesTM is centered on a volume memory neuron with local computing at each neuron.


Visual genomes are like human DNA strands defining a biological object, but for visual intelligence the visual DNA is contained in strands of VisualDNATM, or VDNATM, describing multimodal and multivariate features detected in visual objects.


Multiple intelligence models are neurologically specialized for several types of learning, for example the visual, textual, emotional, and logical intelligence modes detectable within neurology, and are biologically plausible, unlike the misleading concept of AGI (Artificial General Intelligence), which is like "the One Ring that binds them" from Tolkien's The Lord of the Rings trilogy and cannot be demonstrated biologically. See Howard Gardner's seminal work: Gardner, H. (1983). Frames of Mind: The Theory of Multiple Intelligences. New York: Basic Books.


Visual genomes are collected by a process called VDNA sequencing, similar to DNA sequencing as performed in the Human Genome Project. VDNA strands define regions of visual objects.
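
As an illustration of the idea that VDNA strands define regions of visual objects (the actual sequencing process is proprietary and not described here), the sketch below tiles an image into fixed-size regions and records one simple feature strand per region; the tiling, region size, and feature statistics are assumptions for illustration only:

# Illustrative sketch of the "VDNA sequencing" idea: walk over regions of an
# image and record one simple feature strand per region. The tiling, region
# size, and feature statistics are assumptions, not the proprietary process.
import numpy as np

def sequence_vdna(image: np.ndarray, region: int = 32):
    """Yield (row, col, strand) for each region of an H x W x 3 image."""
    h, w = image.shape[:2]
    for r in range(0, h - region + 1, region):
        for c in range(0, w - region + 1, region):
            patch = image[r:r + region, c:c + region]
            strand = {
                "color": patch.reshape(-1, 3).mean(axis=0),   # mean color per channel
                "texture": np.array([patch.std()]),           # rough texture proxy
                "shape": np.histogram(patch.mean(axis=2), bins=8)[0].astype(float),
            }
            yield r, c, strand

# Usage: sequence a synthetic image and count the extracted strands.
img = np.random.default_rng(1).random((128, 128, 3))
print(sum(1 for _ in sequence_vdna(img)))  # 16 regions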


Special Partners And Sponsors Welcome!


How to get VisualGenomesTM

+ Available via the Custom Shop under NDA and special work order

+ Integrated into VisionWatcher under NDA

+ Developer API available under NDA

An intro to VisualGenomes and VDNA from JR, our AI Avatar!