
Tensorgirl's workspace

index: 1140662
id: 1906.08476
title: PointNLM: Point Nonlocal-Means for vegetation segmentation based on middle echo point clouds
abstract: Middle-echo, which covers one or a few corresponding points, is a specific type of 3D point cloud acquired by a multi-echo laser scanner. In this paper, we propose a novel approach for automatic segmentation of trees that leverages middle-echo information from LiDAR point clouds. First, using a convolutional classification method, the point clouds reflected by the middle echoes are identified among all point clouds and distinguished from the first and last echoes; hence, the crown positions of the trees are quickly detected from the huge number of points. Second, to accurately extract trees from all point clouds, we propose a 3D deep learning network, PointNLM, to semantically segment tree crowns. PointNLM captures the long-range relationships between points via a non-local branch and extracts high-level features via max-pooling applied to unordered points. The whole framework is evaluated using the Semantic3D reduced test set. The IoU of tree point cloud segmentation reached 0.864. In addition, the semantic segmentation network was tested using the Paris-Lille-3D dataset. The average IoU outperformed several other popular methods. The experimental results indicate that the proposed algorithm provides an excellent solution for vegetation segmentation from LiDAR point clouds.
categories: cs.CV
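The non-local branch described in this abstract is, in spirit, a self-attention step in which every point attends to every other point. A minimal illustrative sketch in NumPy (the actual PointNLM layer is not specified here; learned projections and the residual connection are omitted, and the uniform dot-product similarity is an assumption):

```python
import numpy as np

def nonlocal_block(feats):
    """Toy non-local aggregation over per-point features.

    feats: (N, C) array of point features. Returns (N, C), where each
    output row is a weighted mean of ALL point features -- this is what
    gives the block its long-range receptive field. Illustrative sketch
    only; a real non-local layer adds learned projections and a residual.
    """
    # pairwise similarity between point features
    sim = feats @ feats.T                       # (N, N)
    # row-wise softmax -> attention weights (shifted for stability)
    sim = sim - sim.max(axis=1, keepdims=True)
    w = np.exp(sim)
    w = w / w.sum(axis=1, keepdims=True)
    # each output feature is a convex combination of all input features
    return w @ feats
```

Because the weights in each row sum to one, every output feature stays inside the convex hull of the input features.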
index: 410341
id: 1302.6001
title: Conditional G-expectation in $\mathbb{L}^{p}$ and related It\^o's calculus
abstract: In this paper, we define a dynamically consistent conditional G-expectation in the space $\mathbb{L}^{p}$ and give the related stochastic calculus of It\^o's type; in particular, we obtain It\^o's formula for a general $C^{1,2}$-function.
categories: math.PR
index: 949372
id: 1802.09832
title: Estimates of Potential functions of random walks on $Z$ with zero mean and infinite variance and their applications
abstract: Let $S_n = X_1+\cdots+X_n$ be an irreducible random walk (r.w.) on the one-dimensional integer lattice with zero mean, infinite variance and i.i.d. increments $X_n$. We obtain upper and lower bounds of the potential function, $a(x)$, of $S_n$ in the form $a(x)\asymp x/m(x)$ under a reasonable condition on the distribution of $X_n$; in particular, we show that as $x\to\infty$ $$a(x) \asymp \frac{x}{m_-(x)} \quad\mbox{and}\quad \frac{a(-x)}{a(x)} \to 0 \quad\;\;\mbox{if}\quad \lim_{x\to +\infty} \frac{m_+(x)}{m_-(x)} = 0,$$ where $m_\pm(x) = \int_0^x dy\int_y^\infty P[\pm X_1>u]\,du$ and $m = m_+ + m_-$. Under certain conditions on the tails of the distribution of $X$ we derive precise asymptotic forms of $a(x)$ as $x\to +\infty$ and/or $x\to -\infty$. The results are applied to derive a sufficient condition for the relative stability of the ladder height and estimates of some escape probabilities from the origin; we show, among other things, that under the above condition on $m_+/m_-$, $P[S_n>0] \to 1/\alpha$ if and only if the probability of exiting a long interval $[-Q,R]$ through the upper boundary converges to $\lambda^{\alpha-1}$ as $Q/(Q+R) \to \lambda$ for any $0<\lambda<1$.
categories: math.PR
index: 1937151
id: 2310.14894
title: Local Universal Rule-based Explanations
abstract: Explainable artificial intelligence (XAI) is one of the most intensively developed areas of AI in recent years. It is also one of the most fragmented, with multiple methods that focus on different aspects of explanations. This makes it difficult to obtain the full spectrum of explanations at once in a compact and consistent way. To address this issue, we present the Local Universal Explainer (LUX), a rule-based explainer that can generate factual, counterfactual and visual explanations. It is based on a modified version of decision-tree algorithms that allows for oblique splits and integration with feature-importance XAI methods such as SHAP or LIME. In contrast to other algorithms, it does not use data generation; instead, it focuses on selecting local concepts in the form of high-density clusters of real data that have the highest impact on forming the decision boundary of the explained model. We tested our method on real and synthetic datasets and compared it with state-of-the-art rule-based explainers such as LORE, EXPLAN and Anchor. Our method outperforms currently existing approaches in terms of simplicity, global fidelity and representativeness.
categories: cs.AI cs.LG
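The "oblique splits" this abstract mentions generalize the axis-parallel test of an ordinary decision tree (x_j <= t) to a threshold on a weighted combination of features. A small sketch (the weight vector standing in for SHAP/LIME-derived importances is a hypothetical placeholder; the exact LUX procedure is not given in the abstract):

```python
import numpy as np

def oblique_split(X, weights, threshold):
    """Oblique split: route samples by thresholding w . x.

    An axis-parallel tree node tests a single feature (x_j <= t); an
    oblique node tests a linear combination, which lets one node capture
    a diagonal decision boundary that axis-parallel splits can only
    approximate with a staircase of nodes.
    """
    return X @ weights <= threshold

# Toy 2-D data whose true boundary is diagonal: class = [x0 + x1 > 1].
X = np.array([[0.2, 0.1],
              [0.9, 0.8],
              [0.4, 0.3],
              [0.7, 0.9]])
w = np.array([1.0, 1.0])          # e.g. equal importance weights (assumed)
left = oblique_split(X, w, 1.0)   # left -> [True, False, True, False]
```

A single oblique node separates this diagonal boundary exactly, whereas a single axis-parallel test on x0 or x1 alone cannot.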
index: 1212915
id: 1912.01019
title: Canonical analysis of $n$-dimensional Palatini action without second-class constraints
abstract: We carry out the canonical analysis of the $n$-dimensional Palatini action with or without a cosmological constant $(n\geq3)$, introducing neither second-class constraints nor resorting to any gauge fixing. This is accomplished by providing an expression for the spatial components of the connection that allows us to isolate the nondynamical variables present among them, which can later be eliminated from the action by using their own equation of motion. As a result, we obtain the description of the phase space of general relativity in terms of manifestly $SO(n-1,1)$ [or $SO(n)$] covariant variables subject to first-class constraints only, with no second-class constraints arising during the process. Afterwards, we perform, at the covariant level, a canonical transformation to a set of variables in terms of which the above constraints take a simpler form. Finally, we impose the time gauge and make contact with the $SO(n-1)$ ADM formalism.
categories: gr-qc hep-th math-ph math.MP
index: 1590996
id: 2201.05624
title: Scientific Machine Learning through Physics-Informed Neural Networks: Where we are and What's next
abstract: Physics-Informed Neural Networks (PINNs) are neural networks (NNs) that encode model equations, such as partial differential equations (PDEs), as a component of the neural network itself. PINNs are nowadays used to solve PDEs, fractional equations, integral-differential equations, and stochastic PDEs. This novel methodology has arisen as a multi-task learning framework in which an NN must fit observed data while reducing a PDE residual. This article provides a comprehensive review of the literature on PINNs: the primary goal of the study is to characterize these networks and their related advantages and disadvantages. The review also attempts to incorporate publications on a broader range of collocation-based physics-informed neural networks that stem from the vanilla PINN, as well as many other variants, such as physics-constrained neural networks (PCNNs), variational hp-VPINNs, and conservative PINNs (CPINNs). The study indicates that most research has focused on customizing the PINN through different activation functions, gradient-optimization techniques, neural network structures, and loss-function structures. Despite the wide range of applications for which PINNs have been used, and their demonstrated ability to be more feasible in some contexts than classical numerical techniques like the Finite Element Method (FEM), advancements are still possible, most notably regarding theoretical issues that remain unresolved.
categories: cs.LG cs.AI cs.NA math.NA physics.data-an
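The multi-task objective this abstract describes (fit observed data while reducing an equation residual) can be made concrete with a deliberately tiny example for the ODE u'(x) = u(x), u(0) = 1. The one-parameter surrogate u(x) = exp(a·x) stands in for a real NN so the derivative is available in closed form; this is a sketch of the composite loss only, not the review's implementation:

```python
import numpy as np

def pinn_loss(params, x_data, u_data, x_colloc):
    """Composite PINN-style loss for the toy ODE u'(x) = u(x).

    Surrogate model: u(x) = exp(a * x) with one trainable parameter `a`
    (a real PINN would use a multilayer NN differentiated with autodiff).
    The loss sums a physics term (squared ODE residual at collocation
    points) and a data term (squared misfit against observations).
    """
    a = params[0]
    u_colloc = np.exp(a * x_colloc)
    du_dx = a * np.exp(a * x_colloc)          # exact derivative of the surrogate
    residual = du_dx - u_colloc               # ODE residual at collocation points
    data_term = np.exp(a * x_data) - u_data   # misfit against observed data
    return np.mean(residual**2) + np.mean(data_term**2)
```

With data sampled from the true solution u(x) = exp(x), the loss vanishes exactly at a = 1 and is strictly positive elsewhere, which is the fixed point a gradient optimizer would seek.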
columns: Unnamed: 0, id, title, abstract, categories