dNetReorder
Function to reorder multiple graph colorings within a
sheet-shape rectangle grid
dNetReorder(g, data, feature = c("node", "edge"),
    node.normalise = c("none", "degree"), xdim = NULL, ydim = NULL,
    amplifier = NULL, metric = c("none", "pearson", "spearman", "kendall",
    "euclidean", "manhattan", "cos", "mi"),
    init = c("linear", "uniform", "sample"),
    algorithm = c("sequential", "batch"),
    alphaType = c("invert", "linear", "power"),
    neighKernel = c("gaussian", "bubble", "cutgaussian", "ep", "gamma"))
An object of class "sReorder", a list with the following components:
nHex
: the total number of rectangles in the grid
xdim
: x-dimension of the grid
ydim
: y-dimension of the grid
uOrder
: the unique order/placement for each component plane that is
reordered onto the "sheet"-shape grid with a rectangular lattice
coord
: a matrix of nHex x 2, with each row corresponding
to the coordinates of each "uOrder" rectangle in the 2D map grid
call
: the call that produced this result
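For quick orientation, below is a minimal sketch of how these components can be inspected in R; it assumes the object sReorder created in the example further down this page.

# Inspect the components of an "sReorder" object
# (assumes 'sReorder' comes from the example below)
sReorder$nHex          # total number of rectangles in the grid
sReorder$xdim          # x-dimension of the grid
sReorder$ydim          # y-dimension of the grid
head(sReorder$uOrder)  # unique order/placement for each component plane
dim(sReorder$coord)    # should be nHex x 2
sReorder$call          # the call that produced this result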
Depending on which feature is used ("node" or "edge") and whether nodes should be penalised by their degrees, the feature data are constructed differently from the input data and the input graph.
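As a purely illustrative sketch (not the package's internal code): the example output below suggests that, for feature="node", each sample forms one component plane over the nodes, i.e. the feature matrix is the transposed input data; dividing by node degree is shown here only as one plausible way the "degree" penalisation could act.

# Hypothetical illustration only -- not dNetReorder's documented internals.
# 'subg' and 'data' are assumed to come from the example below.
library(igraph)
feature_data <- t(data)               # samples x nodes (feature="node")
deg <- igraph::degree(subg)
feature_data_deg <- sweep(feature_data, 2, pmax(deg, 1), FUN="/")  # assumed "degree" penalisation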
The size of the "sheet"-shape rectangle grid depends on the input arguments: when xdim and ydim are given, the grid contains nHex=xdim*ydim rectangles; otherwise, nHex is estimated as 5*sqrt(dlen), where dlen is the number of rows of the input data.
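A small sketch of this sizing rule as read from the text above (an interpretation, not the package's internal code):

# Grid-sizing rule as described above (interpretation only)
grid_size <- function(dlen, xdim=NULL, ydim=NULL) {
    if (!is.null(xdim) && !is.null(ydim)) {
        xdim * ydim                 # explicit grid: nHex = xdim*ydim
    } else {
        ceiling(5 * sqrt(dlen))     # heuristic: nHex = 5*sqrt(dlen)
    }
}
grid_size(dlen=100)                   # about 5*sqrt(100) = 50 rectangles
grid_size(dlen=100, xdim=6, ydim=6)   # 36 rectangles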
# 1) generate a random graph according to the ER model
g <- erdos.renyi.game(100, 1/100)

# 2) produce the induced subgraph only based on the nodes in query
subg <- dNetInduce(g, V(g), knn=0)

# 3) reorder the module with vertices being color-coded by input data
nnodes <- vcount(subg)
nsamples <- 10
data <- matrix(runif(nnodes*nsamples), nrow=nnodes, ncol=nsamples)
rownames(data) <- V(subg)$name
sReorder <- dNetReorder(g=subg, data, feature="node", node.normalise="none")

Start at 2018-01-19 12:36:46
First, define topology of a map grid (2018-01-19 12:36:46)...
Second, initialise the codebook matrix (36 X 12) using 'linear' initialisation, given a topology and input data (2018-01-19 12:36:46)...
Third, get training at the rough stage (2018-01-19 12:36:46)...
    1 out of 360 (2018-01-19 12:36:46)
    36 out of 360 (2018-01-19 12:36:46)
    72 out of 360 (2018-01-19 12:36:46)
    108 out of 360 (2018-01-19 12:36:46)
    144 out of 360 (2018-01-19 12:36:46)
    180 out of 360 (2018-01-19 12:36:46)
    216 out of 360 (2018-01-19 12:36:46)
    252 out of 360 (2018-01-19 12:36:46)
    288 out of 360 (2018-01-19 12:36:46)
    324 out of 360 (2018-01-19 12:36:46)
    360 out of 360 (2018-01-19 12:36:46)
Fourth, get training at the finetune stage (2018-01-19 12:36:46)...
    1 out of 1440 (2018-01-19 12:36:47)
    144 out of 1440 (2018-01-19 12:36:47)
    288 out of 1440 (2018-01-19 12:36:47)
    432 out of 1440 (2018-01-19 12:36:47)
    576 out of 1440 (2018-01-19 12:36:47)
    720 out of 1440 (2018-01-19 12:36:47)
    864 out of 1440 (2018-01-19 12:36:47)
    1008 out of 1440 (2018-01-19 12:36:47)
    1152 out of 1440 (2018-01-19 12:36:47)
    1296 out of 1440 (2018-01-19 12:36:47)
    1440 out of 1440 (2018-01-19 12:36:47)
Next, identify the best-matching hexagon/rectangle for the input data (2018-01-19 12:36:47)...
Finally, append the response data (hits and mqe) into the sMap object (2018-01-19 12:36:47)...
Below are the summaries of the training results:
    dimension of input data: 10x12
    xy-dimension of map grid: xdim=6, ydim=6, r=3
    grid lattice: rect
    grid shape: sheet
    dimension of grid coord: 36x2
    initialisation method: linear
    dimension of codebook matrix: 36x12
    mean quantization error: 0.207972416009171
Below are the details of trainology:
    training algorithm: sequential
    alpha type: invert
    training neighborhood kernel: gaussian
    trainlength (x input data length): 36 at rough stage; 144 at finetune stage
    radius (at rough stage): from 1 to 1
    radius (at finetune stage): from 1 to 1
End at 2018-01-19 12:36:47
Runtime in total is: 1 secs
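As a follow-up to the example, the reordered colorings can then be visualised with the companion function visNetReorder from the same package; the call below is a hedged sketch whose argument names mirror the example above (check ?visNetReorder for the exact interface).

# 4) visualise the module with vertices being color-coded by input data
# (sketch; see ?visNetReorder for the exact arguments)
# visNetReorder(g=subg, data=data, sReorder=sReorder)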