{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Getting Started with GEDI L1B Data in Python\n",
"### This tutorial demonstrates how to work with the Geolocated Waveform ([GEDI01_B.001](https://doi.org/10.5067/GEDI/GEDI01_B.001)) data product.\n",
    "The Global Ecosystem Dynamics Investigation ([GEDI](https://lpdaac.usgs.gov/data/get-started-data/collection-overview/missions/gedi-overview/)) mission aims to characterize ecosystem structure and dynamics to enable radically improved quantification and understanding of the Earth's carbon cycle and biodiversity. The GEDI instrument produces high-resolution laser ranging observations of the 3-dimensional structure of the Earth. GEDI is attached to the International Space Station and collects data globally between 51.6$^{\\circ}$ N and 51.6$^{\\circ}$ S latitudes at the highest resolution and densest sampling of any light detection and ranging (lidar) instrument in orbit to date. The Land Processes Distributed Active Archive Center (LP DAAC) distributes the GEDI Level 1 and Level 2 products. The L1B and L2 GEDI products are archived and distributed in the HDF-EOS5 file format. \n",
"\n",
"---\n",
"## Use Case Example: \n",
    "This tutorial was developed using an example use case for a project being completed by the National Park Service. **The goal of the project is to use GEDI L1B data to observe GEDI waveforms over Redwood National Park in northern California.** \n",
"\n",
"This tutorial will show how to use Python to open GEDI L1B files, visualize the full orbit of GEDI points (shots), subset to a region of interest, visualize GEDI full waveforms, and export subsets of GEDI science dataset (SDS) layers as GeoJSON files that can be loaded into GIS and/or Remote Sensing software programs. \n",
"*** \n",
"### Data Used in the Example: \n",
"- **GEDI L1B Geolocated Waveform Data Global Footprint Level - [GEDI01_B.001](https://doi.org/10.5067/GEDI/GEDI01_B.001)**\n",
" - _The purpose of the L1B dataset is to provide geolocated waveforms and supporting datasets for each laser shot for all eight GEDI beams. This includes corrected and smoothed waveforms, geolocation parameters, and geophysical corrections._ \n",
" - **Science Dataset (SDS) layers:** \n",
" - /geolocation/latitude_bin0 \n",
" - /geolocation/longitude_bin0 \n",
" - /shot_number \n",
" - /stale_return_flag \n",
" - /geolocation/degrade \n",
" - /rx_sample_count \n",
" - /rx_sample_start_index \n",
" - /rxwaveform \n",
" - /geolocation/elevation_bin0 \n",
" - /geolocation/elevation_lastbin \n",
"\n",
"*** \n",
"# Topics Covered:\n",
"1. [**Get Started**](#getstarted) \n",
" 1.1 Import Packages \n",
" 1.2 Set Up the Working Environment and Retrieve Files \n",
"2. [**Import and Interpret Data**](#importinterpret) \n",
" 2.1 Open a GEDI HDF5 File and Read File Metadata \n",
" 2.2 Read SDS Metadata and Subset by Beam \n",
"3. [**Visualize a GEDI Orbit**](#visualizeorbit) \n",
" 3.1 Subset by Layer and Create a Geodataframe \n",
" 3.2 Visualize a Geodataframe\n",
"4. [**Subset and Visualize Waveforms**](#subsetviswaveforms) \n",
" 4.1 Import and Extract Waveforms \n",
" 4.2 Visualize Waveforms \n",
"5. [**Plot Profile Transects**](#plottransects) \n",
" 5.1 Plot Waveform Transects \n",
"6. [**Export Subsets as GeoJSON Files**](#exportgeojson) \n",
"***\n",
"# Before Starting this Tutorial:\n",
"## Setup and Dependencies \n",
    "It is recommended to use [Conda](https://conda.io/docs/), an environment manager, to set up a compatible Python environment. Download Conda for your OS here: https://www.anaconda.com/download/. Once you have Conda installed, follow the instructions below to successfully set up a Python environment on Linux, macOS, or Windows.\n",
    "\n",
    "This Python Jupyter Notebook tutorial has been tested using Python version 3.7. Conda was used to create the Python environment. \n",
"\n",
    "  - Using your preferred command line interface (command prompt, terminal, cmder, etc.) type the following to create a compatible Python environment:\n",
" > `conda create -n geditutorial -c conda-forge --yes python=3.7 h5py shapely geopandas pandas geoviews holoviews` \n",
" \n",
" > `conda activate geditutorial` \n",
" \n",
" > `jupyter notebook` \n",
"\n",
    "If you do not have Jupyter Notebook installed, you may need to run: \n",
" > `conda install jupyter notebook` \n",
"\n",
"#### Having trouble getting a compatible Python environment set up? Contact LP DAAC User Services at: https://lpdaac.usgs.gov/lpdaac-contact-us/\n",
"\n",
    "If you prefer not to install Conda, the same setup and dependencies can be achieved using another package manager, such as `pip`. \n",
"***\n",
"## Example Data:\n",
"This tutorial uses the GEDI L1B observation from June 19, 2019 (orbit 02932). Use the link below to download the file directly from the LP DAAC Data Pool: \n",
" - https://e4ftl01.cr.usgs.gov/GEDI/GEDI01_B.001/2019.06.19/GEDI01_B_2019170155833_O02932_T02267_02_003_01.h5 (7.87 GB) \n",
"\n",
"#### A [NASA Earthdata Login](https://urs.earthdata.nasa.gov/) account is required to download the data used in this tutorial. You can create an account at the link provided. \n",
"### You will need to have the file above downloaded into the same directory as this Jupyter Notebook in order to successfully run the code below.\n",
"\n",
"## Source Code used to Generate this Tutorial:\n",
"The repository containing all of the required files is located at: https://git.earthdata.nasa.gov/projects/LPDUR/repos/gedi-tutorial/browse \n",
"- [Jupyter Notebook](https://git.earthdata.nasa.gov/projects/LPDUR/repos/gedi-tutorial/browse/GEDI_L1B_Tutorial.ipynb) \n",
"- [Redwood National Park GeoJSON](https://git.earthdata.nasa.gov/projects/LPDUR/repos/gedi-tutorial/browse/RedwoodNP.geojson) \n",
" - Contains the administrative boundary for Redwood National Park, available from: [Administrative Boundaries of National Park System Units 12/31/2017 - National Geospatial Data Asset (NGDA) NPS National Parks Dataset](https://irma.nps.gov/DataStore/DownloadFile/594958)\n",
"\n",
    "NOTE: This tutorial was developed for GEDI L1B HDF-EOS5 files and should only be used for that product.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"# 1. Get Started "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 1.1 Import Packages \n",
"#### Import the required packages and set the input/working directory to run this Jupyter Notebook locally."
]
},
{
"cell_type": "code",
   "execution_count": 25,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
":Overlay\n",
" .Polygons.I :Polygons [Longitude,Latitude]\n",
" .WMTS.I :WMTS [Longitude,Latitude]\n",
" .Points.I :Points [Longitude,Latitude] (Beam,Shot Number,Stale Return Flag,Degrade)"
]
},
"execution_count": 25,
"metadata": {
"application/vnd.holoviews_exec.v0+json": {
"id": "1022"
}
},
"output_type": "execute_result"
}
],
"source": [
"# Call the function for plotting the GEDI points\n",
"gv.Polygons(redwoodNP).opts(line_color='red', color=None) * pointVisual(latslons, vdims = vdims)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Above is a good illustration of the full GEDI orbit (GEDI files are stored as one ISS orbit). One of the benefits of using geoviews is the interactive nature of the output plots. Use the tools to the right of the map above to zoom in and find the shots intersecting Redwood National Park. \n",
"> (**HINT**: find where the orbit intersects the west coast of the United States)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
    "#### Below is a screenshot of the region of interest:\n",
    "*(Figure: GEDI shots plotted over Redwood National Park, USA.)*"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Side Note: Wondering what the 0's and 1's for `stale_return_flag` and `degrade` mean?"
]
},
{
"cell_type": "code",
"execution_count": 26,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"stale_return_flag: Indicates that a \"stale\" cue point from the coarse search algorithm is being used.\n",
"degrade: Greater than zero if the shot occurs during a degrade period, zero otherwise.\n"
]
}
],
"source": [
"print(f\"stale_return_flag: {gediL1B[b]['stale_return_flag'].attrs['description']}\")\n",
"print(f\"degrade: {gediL1B[b]['geolocation']['degrade'].attrs['description']}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
    "#### We will show an example of how to quality filter GEDI data in Section 5.\n",
"#### After finding one of the shots within Redwood NP, find the index for that shot number so that we can find the correct waveform to visualize in Section 4. "
]
},
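  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### As a preview of that quality filtering, shots with nonzero flags can be dropped with boolean masks. This is a toy sketch with a made-up dataframe; the column names simply mirror the point-plot dimensions used above:\n",
    "```python\n",
    "import pandas as pd\n",
    "\n",
    "# Toy dataframe standing in for the GEDI shots; real data comes from the HDF5 file\n",
    "df = pd.DataFrame({'Shot Number': [1, 2, 3, 4],\n",
    "                   'Stale Return Flag': [0, 1, 0, 0],\n",
    "                   'Degrade': [0, 0, 1, 0]})\n",
    "\n",
    "# Keep only shots where both flags indicate nominal data\n",
    "good = df[(df['Stale Return Flag'] == 0) & (df['Degrade'] == 0)]\n",
    "print(list(good['Shot Number']))  # [1, 4]\n",
    "```"
   ]
  },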
{
"cell_type": "markdown",
"metadata": {},
"source": [
    "#### Each GEDI shot has a unique shot identifier (shot number) that is available within each data group of the product. The shot number is important to retain in any data subsetting because it allows the user to link any shot record back to the original orbit data, and to link any shot and its data between the L1 and L2 products. The standard format for GEDI shot numbers is as follows:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Shot: 29320619900465601\n",
"> **2932**: Orbit Number \n",
"**06**: Beam Number \n",
"**199**: Minor frame number (0-241) \n",
"**00465601**: Shot number within orbit "
]
},
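  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### The breakdown above can be reproduced with simple string slicing (a minimal sketch, not an official parser; the field widths come from the format described above):\n",
    "```python\n",
    "shot = 29320619900465601\n",
    "s = str(shot).zfill(17)  # pad to 17 digits in case of leading zeros\n",
    "orbit, beam, minor_frame, shot_in_orbit = s[:4], s[4:6], s[6:9], s[9:]\n",
    "print(orbit, beam, minor_frame, shot_in_orbit)  # 2932 06 199 00465601\n",
    "```"
   ]
  },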
{
"cell_type": "code",
"execution_count": 27,
"metadata": {},
"outputs": [],
"source": [
"shot = 29320619900465601"
]
},
{
"cell_type": "code",
"execution_count": 28,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"465600"
]
},
"execution_count": 28,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"index = np.where(gediL1B[f'{beamNames[0]}/shot_number'][()]==shot)[0][0] # Set the index for the shot identified above\n",
"index"
]
},
{
"cell_type": "code",
"execution_count": 29,
"metadata": {},
"outputs": [],
"source": [
"del latslons # No longer need the geodataframe used to visualize the full GEDI orbit"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"# 4. Subset and Visualize Waveforms \n",
"#### In this section, learn how to extract and subset specific waveforms and plot them using `holoviews`. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 4.1 Import and Extract Waveforms"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
    "#### Rather than importing the entire waveform dataset (over 1 billion values!), we will use `rx_sample_count` and `rx_sample_start_index` to locate the one waveform we are interested in visualizing and extract only those values."
]
},
{
"cell_type": "code",
"execution_count": 30,
"metadata": {},
"outputs": [],
"source": [
    "# From the SDS list, use list comprehension to find rx_sample_count, rx_sample_start_index, and rxwaveform\n",
"sdsCount = gediL1B[[g for g in gediSDS if g.endswith('/rx_sample_count') and beamNames[0] in g][0]]\n",
"sdsStart = gediL1B[[g for g in gediSDS if g.endswith('/rx_sample_start_index') and beamNames[0] in g][0]]\n",
"sdsWaveform = [g for g in gediSDS if g.endswith('/rxwaveform') and beamNames[0] in g][0]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Print the description for each of these datasets to better understand how we will use them to extract specific waveforms."
]
},
{
"cell_type": "code",
"execution_count": 31,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"rxwaveform is The corrected receive (RX) waveforms. Use rx_sample_count and rx_sample_start_index to identify the location of each waveform.\n"
]
}
],
"source": [
"print(f\"rxwaveform is {gediL1B[sdsWaveform].attrs['description']}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
    "#### Next, read how to use the `rx_sample_count` and `rx_sample_start_index` layers:"
]
},
{
"cell_type": "code",
"execution_count": 32,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"rx_sample_count is The number of sample intervals (elements) in each RX waveform.\n",
"rx_sample_start_index is The index in the rxwaveform dataset of the first element of each RX waveform. The indices start at 1.\n"
]
}
],
"source": [
"print(f\"rx_sample_count is {sdsCount.attrs['description']}\")\n",
"print(f\"rx_sample_start_index is {sdsStart.attrs['description']}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Use `rx_sample_count` and `rx_sample_start_index` to identify the location of each waveform in `rxwaveform`."
]
},
{
"cell_type": "code",
"execution_count": 33,
"metadata": {},
"outputs": [],
"source": [
"wfCount = sdsCount[index] # Number of samples in the waveform\n",
    "wfStart = int(sdsStart[index] - 1)  # Subtract one because Python array indexing starts at 0, not 1"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
    "#### Next, grab additional information about the shot, including the unique `shot_number` and the lat/lon location."
]
},
{
"cell_type": "code",
"execution_count": 34,
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"wfShot = gediL1B[f'{beamNames[0]}/shot_number'][index]\n",
"wfLat = gediL1B[f'{beamNames[0]}/geolocation/latitude_bin0'][index]\n",
"wfLon = gediL1B[f'{beamNames[0]}/geolocation/longitude_bin0'][index]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Put everything together to identify the waveform we want to extract:"
]
},
{
"cell_type": "code",
"execution_count": 35,
"metadata": {
"scrolled": true
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The waveform located at: 41.28533109208681, -124.0300090357876 (shot ID: 29320619900465601, index 465600) is from beam BEAM0110 and is stored in rxwaveform beginning at index 661152000 and ending at index 661152812\n"
]
}
],
"source": [
"print(f\"The waveform located at: {str(wfLat)}, {str(wfLon)} (shot ID: {wfShot}, index {index}) is from beam {beamNames[0]} \\\n",
" and is stored in rxwaveform beginning at index {wfStart} and ending at index {wfStart + wfCount}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### In order to plot a full waveform, you also need to import the elevation recorded at `bin0` (the first elevation recorded) and at `lastbin` (the last elevation recorded) for that waveform."
]
},
{
"cell_type": "code",
"execution_count": 36,
"metadata": {},
"outputs": [],
"source": [
"# Grab the elevation recorded at the start and end of the full waveform capture\n",
"zStart = gediL1B[f'{beamNames[0]}/geolocation/elevation_bin0'][index] # Height of the start of the rx window\n",
"zEnd = gediL1B[f'{beamNames[0]}/geolocation/elevation_lastbin'][index] # Height of the end of the rx window"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Extract the full waveform using the index start and count:\n",
"#### Below you can see why it is important to extract the specific waveform that you are interested in: almost 1.4 billion values are stored in the rxwaveform dataset!"
]
},
{
"cell_type": "code",
"execution_count": 37,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"1,391,172,580\n"
]
}
],
"source": [
"print(\"{:,}\".format(gediL1B[sdsWaveform].shape[0]))"
]
},
{
"cell_type": "code",
"execution_count": 38,
"metadata": {},
"outputs": [],
"source": [
"# Retrieve the waveform sds layer using the sample start index and sample count information to slice the correct dimensions\n",
"waveform = gediL1B[sdsWaveform][wfStart: wfStart + wfCount]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 4.2 Visualize Waveforms\n",
"#### Below, plot the extracted waveform using the elevation difference from bin0 to lastbin on the y axis and the waveform energy returned, or amplitude (digital number, DN), on the x axis."
]
},
{
"cell_type": "code",
"execution_count": 39,
"metadata": {},
"outputs": [],
"source": [
"# Find elevation difference from start to finish and divide into equal intervals based on sample_count\n",
"zStretch = np.add(zEnd, np.multiply(range(wfCount, 0, -1), ((zStart - zEnd) / int(wfCount))))"
]
},
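{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Pair each waveform amplitude value with its corresponding stretched elevation and convert the result to a Pandas dataframe for plotting:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# match the waveform amplitude values with the elevation and convert to Pandas df\n",
"wvDF = pd.DataFrame({'Amplitude (DN)': waveform, 'Elevation (m)': zStretch})"
]
},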
{
"cell_type": "code",
"execution_count": 40,
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
""
],
"text/plain": [
":Curve [Amplitude (DN)] (Elevation (m))"
]
},
"execution_count": 41,
"metadata": {
"application/vnd.holoviews_exec.v0+json": {
"id": "1186"
}
},
"output_type": "execute_result"
}
],
"source": [
"hv.Curve(wvDF) # Basic line graph plotting the waveform"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Congratulations! You have plotted your first waveform.\n",
"#### Above is a basic line plot showing amplitude (DN) as a function of elevation from the rxwaveform for the specific shot selected. Next, add additional chart elements and make the graph interactive."
]
},
{
"cell_type": "code",
"execution_count": 42,
"metadata": {},
"outputs": [
{
"data": {
"application/vnd.holoviews_exec.v0+json": "",
"text/html": [
""
],
"text/plain": [
":Curve [Amplitude (DN)] (Elevation (m))"
]
},
"execution_count": 42,
"metadata": {
"application/vnd.holoviews_exec.v0+json": {
"id": "1295"
}
},
"output_type": "execute_result"
}
],
"source": [
"# Create a holoviews interactive Curve plot with additional parameters defining the plot aesthetics \n",
"wfVis = hv.Curve(wvDF).opts(color='darkgreen', tools=['hover'], height=500, width=400,\n",
" xlim=(np.min(waveform) - 10, np.max(waveform) + 10), ylim=(np.min(zStretch), np.max(zStretch)),\n",
" fontsize={'xticks':10, 'yticks':10,'xlabel':16, 'ylabel': 16, 'title':13}, line_width=2.5, title=f'{str(wfShot)}')\n",
"wfVis"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### As you can see, the selected shot does not look especially interesting: this waveform shape is more characteristic of low canopy or bare ground than of a tree canopy. If you zoom in to the GEDI shot, it appears to fall over the river itself or possibly a sand bar.\n",
"\n",
"#### Next, plot a couple more waveforms and see if you can capture a shot over the forest."
]
},
{
"cell_type": "code",
"execution_count": 43,
"metadata": {},
"outputs": [],
"source": [
"latlons = {} # Set up a dictionary to hold multiple waveforms\n",
"latlons[wfShot] = Point(wfLon,wfLat) # Create shapely point and add to dictionary\n",
"\n",
"# Retain waveform and quality layers to be exported later\n",
"latlonsWF = [waveform]\n",
"latlonsEL = [zStretch]\n",
"latlonsSRF = [gediL1B[f'{beamNames[0]}/stale_return_flag'][index]]\n",
"latlonsD = [gediL1B[f'{beamNames[0]}/geolocation/degrade'][index]]"
]
},
{
"cell_type": "code",
"execution_count": 44,
"metadata": {},
"outputs": [],
"source": [
"# Define which observation to examine: the first of the three indexes prior to the one used above\n",
"index = 465597"
]
},
{
"cell_type": "code",
"execution_count": 45,
"metadata": {},
"outputs": [],
"source": [
"# Use rx_sample_count and rx_sample_start_index to identify the location of each waveform\n",
"wfCount = sdsCount[index] # Number of samples in the waveform\n",
"wfStart = int(sdsStart[index] - 1) # Subtract 1, index starts at 0 not 1\n",
"wfShot = gediL1B[f'{beamNames[0]}/shot_number'][index] # Unique Shot Number\n",
"wfLat = gediL1B[f'{beamNames[0]}/geolocation/latitude_bin0'][index] # Latitude\n",
"wfLon = gediL1B[f'{beamNames[0]}/geolocation/longitude_bin0'][index] # Longitude\n",
"latlons[wfShot] = Point(wfLon,wfLat) # Create shapely point and add to dictionary\n",
"\n",
"# Grab the elevation recorded at the start and end of the full waveform capture\n",
"zStart = gediL1B[f'{beamNames[0]}/geolocation/elevation_bin0'][index] # Height of the start of the rx window\n",
"zEnd = gediL1B[f'{beamNames[0]}/geolocation/elevation_lastbin'][index] # Height of the end of the rx window\n",
"\n",
"# Retrieve the waveform sds layer using the sample start index and sample count information to slice the correct dimensions\n",
"waveform = gediL1B[sdsWaveform][wfStart: wfStart + wfCount]"
]
},
{
"cell_type": "code",
"execution_count": 46,
"metadata": {},
"outputs": [],
"source": [
"# Find elevation difference from start to finish, divide into equal intervals\n",
"zStretch = np.add(zEnd, np.multiply(range(wfCount, 0, -1), ((zStart - zEnd) / int(wfCount))))\n",
"\n",
"# match the waveform amplitude values with the elevation and convert to Pandas df\n",
"wvDF = pd.DataFrame({'Amplitude (DN)': waveform, 'Elevation (m)': zStretch})"
]
},
{
"cell_type": "code",
"execution_count": 47,
"metadata": {},
"outputs": [],
"source": [
"# Append waveform and quality layers to be exported later\n",
"latlonsWF.append(waveform)\n",
"latlonsEL.append(zStretch)\n",
"latlonsSRF.append(gediL1B[f'{beamNames[0]}/stale_return_flag'][index])\n",
"latlonsD.append(gediL1B[f'{beamNames[0]}/geolocation/degrade'][index])"
]
},
{
"cell_type": "code",
"execution_count": 48,
"metadata": {},
"outputs": [
{
"data": {
"application/vnd.holoviews_exec.v0+json": "",
"text/html": [
""
],
"text/plain": [
":Curve [Amplitude (DN)] (Elevation (m))"
]
},
"execution_count": 48,
"metadata": {
"application/vnd.holoviews_exec.v0+json": {
"id": "1405"
}
},
"output_type": "execute_result"
}
],
"source": [
"# Create a holoviews interactive Curve plot with additional parameters defining the plot aesthetics \n",
"visL1B1 = hv.Curve(wvDF).opts(color='darkgreen', tools=['hover'], height=500, width=400,\n",
" xlim=(np.min(waveform) - 10, np.max(waveform) + 10), ylim=(np.min(zStretch), np.max(zStretch)),\n",
" fontsize={'xticks':10, 'yticks':10,'xlabel':16, 'ylabel': 16, 'title':13}, line_width=2.5, title=f'{str(wfShot)}')\n",
"visL1B1"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Now that is starting to look more like a very tall, multi-layered tree canopy! Continue with the next shot:"
]
},
{
"cell_type": "code",
"execution_count": 49,
"metadata": {},
"outputs": [],
"source": [
"# Define which observation to examine: the second of the three indexes prior to the one used above\n",
"index = 465598"
]
},
{
"cell_type": "code",
"execution_count": 50,
"metadata": {},
"outputs": [],
"source": [
"# Use rx_sample_count and rx_sample_start_index to identify the location of each waveform\n",
"wfCount = sdsCount[index] # Number of samples in the waveform\n",
"wfStart = int(sdsStart[index] - 1) # Subtract 1, index starts at 0 not 1\n",
"wfShot = gediL1B[f'{beamNames[0]}/shot_number'][index] # Unique Shot Number\n",
"wfLat = gediL1B[f'{beamNames[0]}/geolocation/latitude_bin0'][index] # Latitude\n",
"wfLon = gediL1B[f'{beamNames[0]}/geolocation/longitude_bin0'][index] # Longitude\n",
"latlons[wfShot] = Point(wfLon,wfLat) # Create shapely point and add to dictionary\n",
"\n",
"# Grab the elevation recorded at the start and end of the full waveform capture\n",
"zStart = gediL1B[f'{beamNames[0]}/geolocation/elevation_bin0'][index] # Height of the start of the rx window\n",
"zEnd = gediL1B[f'{beamNames[0]}/geolocation/elevation_lastbin'][index] # Height of the end of the rx window\n",
"\n",
"# Retrieve the waveform sds layer using the sample start index and sample count information to slice the correct dimensions\n",
"waveform = gediL1B[sdsWaveform][wfStart: wfStart + wfCount]"
]
},
{
"cell_type": "code",
"execution_count": 51,
"metadata": {},
"outputs": [],
"source": [
"# Find elevation difference from start to finish, divide into equal intervals\n",
"zStretch = np.add(zEnd, np.multiply(range(wfCount, 0, -1), ((zStart - zEnd) / int(wfCount))))\n",
"\n",
"# match the waveform amplitude values with the elevation and convert to Pandas df\n",
"wvDF = pd.DataFrame({'Amplitude (DN)': waveform, 'Elevation (m)': zStretch})"
]
},
{
"cell_type": "code",
"execution_count": 53,
"metadata": {},
"outputs": [],
"source": [
"# Export the current waveform to a CSV file\n",
"wvDF.to_csv('waveform.csv')"
]
},
{
"cell_type": "code",
"execution_count": 52,
"metadata": {},
"outputs": [],
"source": [
"# Append waveform and quality layers to be exported later\n",
"latlonsWF.append(waveform)\n",
"latlonsEL.append(zStretch)\n",
"latlonsSRF.append(gediL1B[f'{beamNames[0]}/stale_return_flag'][index])\n",
"latlonsD.append(gediL1B[f'{beamNames[0]}/geolocation/degrade'][index])"
]
},
{
"cell_type": "code",
"execution_count": 54,
"metadata": {},
"outputs": [
{
"data": {
"application/vnd.holoviews_exec.v0+json": "",
"text/html": [
""
],
"text/plain": [
":Curve [Amplitude (DN)] (Elevation (m))"
]
},
"execution_count": 54,
"metadata": {
"application/vnd.holoviews_exec.v0+json": {
"id": "1515"
}
},
"output_type": "execute_result"
}
],
"source": [
"# Create a holoviews interactive Curve plot with additional parameters defining the plot aesthetics \n",
"visL1B2 = hv.Curve(wvDF).opts(color='darkgreen', tools=['hover'], height=500, width=400,\n",
" xlim=(np.min(waveform) - 10, np.max(waveform) + 10), ylim=(np.min(zStretch), np.max(zStretch)),\n",
" fontsize={'xticks':10, 'yticks':10,'xlabel':16, 'ylabel': 16, 'title':13}, line_width=2.5, title=f'{str(wfShot)}')\n",
"visL1B2"
]
},
{
"cell_type": "code",
"execution_count": 55,
"metadata": {},
"outputs": [],
"source": [
"# Define which observation to examine: the third of the three indexes prior to the one used above\n",
"index = 465599"
]
},
{
"cell_type": "code",
"execution_count": 56,
"metadata": {},
"outputs": [],
"source": [
"# Use rx_sample_count and rx_sample_start_index to identify the location of each waveform\n",
"wfCount = sdsCount[index] # Number of samples in the waveform\n",
"wfStart = int(sdsStart[index] - 1) # Subtract 1, index starts at 0 not 1\n",
"wfShot = gediL1B[f'{beamNames[0]}/shot_number'][index] # Unique Shot Number\n",
"wfLat = gediL1B[f'{beamNames[0]}/geolocation/latitude_bin0'][index] # Latitude\n",
"wfLon = gediL1B[f'{beamNames[0]}/geolocation/longitude_bin0'][index] # Longitude\n",
"latlons[wfShot] = Point(wfLon,wfLat) # Create shapely point and add to dictionary\n",
"\n",
"# Grab the elevation recorded at the start and end of the full waveform capture\n",
"zStart = gediL1B[f'{beamNames[0]}/geolocation/elevation_bin0'][index] # Height of the start of the rx window\n",
"zEnd = gediL1B[f'{beamNames[0]}/geolocation/elevation_lastbin'][index] # Height of the end of the rx window\n",
"\n",
"# Retrieve the waveform sds layer using the sample start index and sample count information to slice the correct dimensions\n",
"waveform = gediL1B[sdsWaveform][wfStart: wfStart + wfCount]"
]
},
{
"cell_type": "code",
"execution_count": 57,
"metadata": {},
"outputs": [],
"source": [
"# Find elevation difference from start to finish, divide into equal intervals\n",
"zStretch = np.add(zEnd, np.multiply(range(wfCount, 0, -1), ((zStart - zEnd) / int(wfCount))))\n",
"\n",
"# match the waveform amplitude values with the elevation and convert to Pandas df\n",
"wvDF = pd.DataFrame({'Amplitude (DN)': waveform, 'Elevation (m)': zStretch})"
]
},
{
"cell_type": "code",
"execution_count": 58,
"metadata": {},
"outputs": [],
"source": [
"# Append waveform and quality layers to be exported later\n",
"latlonsWF.append(waveform)\n",
"latlonsEL.append(zStretch)\n",
"latlonsSRF.append(gediL1B[f'{beamNames[0]}/stale_return_flag'][index])\n",
"latlonsD.append(gediL1B[f'{beamNames[0]}/geolocation/degrade'][index])"
]
},
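{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Save this third waveform's plot as `visL1B3`, mirroring the plotting cells above, so it can be combined with the other waveform plots:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Create a holoviews interactive Curve plot with additional parameters defining the plot aesthetics \n",
"visL1B3 = hv.Curve(wvDF).opts(color='darkgreen', tools=['hover'], height=500, width=400,\n",
"                              xlim=(np.min(waveform) - 10, np.max(waveform) + 10), ylim=(np.min(zStretch), np.max(zStretch)),\n",
"                              fontsize={'xticks':10, 'yticks':10,'xlabel':16, 'ylabel': 16, 'title':13}, line_width=2.5, title=f'{str(wfShot)}')\n",
"visL1B3"
]
},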
{
"cell_type": "code",
"execution_count": 59,
"metadata": {},
"outputs": [
{
"data": {
"application/vnd.holoviews_exec.v0+json": "",
"text/html": [
""
],
"text/plain": [
":Layout\n",
" .Curve.I :Curve [Amplitude (DN)] (Elevation (m))\n",
" .Curve.II :Curve [Amplitude (DN)] (Elevation (m))\n",
" .Curve.III :Curve [Amplitude (DN)] (Elevation (m))\n",
" .Curve.IV :Curve [Amplitude (DN)] (Elevation (m))"
]
},
"execution_count": 60,
"metadata": {
"application/vnd.holoviews_exec.v0+json": {
"id": "1949"
}
},
"output_type": "execute_result"
}
],
"source": [
"# The \"+\" symbol will plot multiple saved holoviews plots together\n",
"visL1B1.opts(width=240) + visL1B2.opts(width=240, labelled=[]) + visL1B3.opts(width=240, labelled=[]) + \\\n",
"wfVis.opts(width=240, labelled=[])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Notice above that, moving west to east (left to right) along the transect, the elevation lowers and the tree canopy thins as the beam encounters the sandbar/river."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Below, use geoviews to plot the location of the four points to verify what is seen above."
]
},
{
"cell_type": "code",
"execution_count": 61,
"metadata": {},
"outputs": [],
"source": [
"# Convert dict to geodataframe\n",
"latlons = gp.GeoDataFrame({'Shot Number': list(latlons.keys()),'rxwaveform Amplitude (DN)': latlonsWF,\n",
" 'rxwaveform Elevation (m)': latlonsEL, 'Stale Return Flag': latlonsSRF, 'Degrade': latlonsD,\n",
" 'geometry': list(latlons.values())})"
]
},
{
"cell_type": "code",
"execution_count": 62,
"metadata": {},
"outputs": [],
"source": [
"latlons['Shot Number'] = latlons['Shot Number'].astype(str) # Convert shot number from integer to string"
]
},
{
"cell_type": "code",
"execution_count": 64,
"metadata": {},
"outputs": [],
"source": [
"# Create a list of geodataframe columns to be included as attributes in the output map\n",
"vdims = []\n",
"for f in latlons:\n",
" if f not in ['geometry']:\n",
" if 'rxwaveform' not in f:\n",
" vdims.append(f)\n",
"\n",
"# Plot the geodataframe\n",
"gv.Polygons(redwoodNP).opts(line_color='red', color=None) * pointVisual(latlons, vdims=vdims)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Above, zoom in on the yellow dots until you get a clearer view of all four shots.\n",
"#### In the screenshots below, we can match the location of each point with its associated waveform. This appears to confirm the hypothesis: the elevation decreases as the shots approach the river, and the forest canopy thins until the final waveform in the sequence is detecting the sand bar/river.\n",
"\n",
""
]
},
{
"cell_type": "code",
"execution_count": 65,
"metadata": {},
"outputs": [],
"source": [
"del wfVis, visL1B1, visL1B2, visL1B3, waveform, wvDF, zStretch, latlonsWF, latlonsEL, latlonsSRF, latlonsD"
]
},
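{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Before moving on, note that the per-shot extraction steps above were repeated verbatim for each waveform. As an aside, they could be consolidated into a small helper function. This is only a sketch: it assumes the open `gediL1B` file and that the `beam` argument matches the beam used to define `sdsCount`, `sdsStart`, and `sdsWaveform`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch of a helper consolidating the per-shot extraction steps used above\n",
"def extractWaveform(beam, idx):\n",
"    count = int(sdsCount[idx])      # Number of samples in this waveform\n",
"    start = int(sdsStart[idx] - 1)  # Convert the 1-based start index to 0-based\n",
"    shot = gediL1B[f'{beam}/shot_number'][idx]\n",
"    lat = gediL1B[f'{beam}/geolocation/latitude_bin0'][idx]\n",
"    lon = gediL1B[f'{beam}/geolocation/longitude_bin0'][idx]\n",
"    zS = gediL1B[f'{beam}/geolocation/elevation_bin0'][idx]\n",
"    zE = gediL1B[f'{beam}/geolocation/elevation_lastbin'][idx]\n",
"    wf = gediL1B[sdsWaveform][start: start + count]\n",
"    z = np.add(zE, np.multiply(range(count, 0, -1), (zS - zE) / count))\n",
"    return shot, lon, lat, wf, z"
]
},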
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 5. Quality Filtering \n",
"#### Now that you have the desired layers imported as a dataframe for the selected shots, let's perform quality filtering.\n",
"#### Below, remove any shots where the `stale_return_flag` is set to 1 (indicates that a \"stale\" cue point from the coarse search algorithm is being used) by defining those shots as `nan`. \n",
"#### The syntax of the line below can be read as: in the dataframe, keep the rows \"where\" the stale return flag is not equal (ne) to 1. If a row (shot) does not meet that condition, set all of its values to `nan`."
]
},
{
"cell_type": "code",
"execution_count": 66,
"metadata": {},
"outputs": [],
"source": [
"latlons = latlons.where(latlons['Stale Return Flag'].ne(1)) # Set any stale returns to NaN"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Below, quality filter even further by using the `degrade` flag (Greater than zero if the shot occurs during a degrade period, zero otherwise)."
]
},
{
"cell_type": "code",
"execution_count": 67,
"metadata": {},
"outputs": [],
],
"source": [
"latlons = latlons.where(latlons['Degrade'].ne(1))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Below, drop from `latlons` all of the shots that did not pass the quality filtering standards outlined above."
]
},
{
"cell_type": "code",
"execution_count": 68,
"metadata": {},
"outputs": [],
"source": [
"latlons = latlons.dropna() # Drop all of the rows (shots) that did not pass the quality filtering above"
]
},
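{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### As an aside, the `where`/`ne`/`dropna` pattern can be demonstrated on a small synthetic dataframe (toy values, for illustration only):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Toy example of the quality filtering pattern used above (synthetic values)\n",
"demoDF = pd.DataFrame({'Stale Return Flag': [0, 1, 0], 'Degrade': [0, 0, 1], 'Shot': [1, 2, 3]})\n",
"demoDF = demoDF.where(demoDF['Stale Return Flag'].ne(1))  # Stale rows become NaN\n",
"demoDF = demoDF.where(demoDF['Degrade'].ne(1))            # Degraded rows become NaN\n",
"demoDF = demoDF.dropna()                                  # Drop the NaN rows\n",
"print(len(demoDF))  # -> 1"
]
},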
{
"cell_type": "code",
"execution_count": 69,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Quality filtering complete, 4 high quality shots remaining.\n"
]
}
],
"source": [
"print(f\"Quality filtering complete, {len(latlons)} high quality shots remaining.\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Good news! It looks like all four of the example waveforms passed the initial quality filtering tests. For additional information on quality filtering GEDI data, be sure to check out: https://lpdaac.usgs.gov/resources/faqs/#how-should-i-quality-filter-gedi-l1b-l2b-data."
]
},
{
"cell_type": "code",
"execution_count": 70,
"metadata": {},
"outputs": [],
"source": [
"del latlons"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 6. Plot Profile Transects \n",
"#### In this section, plot a transect subset using waveforms."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 6.1 Subset Beam Transects"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Subset down to a smaller transect centered on the waveforms analyzed in the sections above."
]
},
{
"cell_type": "code",
"execution_count": 70,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"465599\n"
]
}
],
"source": [
"print(index)"
]
},
{
"cell_type": "code",
"execution_count": 71,
"metadata": {},
"outputs": [],
"source": [
"# Grab 50 points before and after the shot visualized above\n",
"start = index - 50\n",
"end = index + 50 \n",
"transectIndex = np.arange(start, end, 1)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 6.2 Plot Waveform Transects"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### In order to get an idea of the length of the beam transect that you are plotting, you can plot the x-axis as distance, which is calculated below."
]
},
{
"cell_type": "code",
"execution_count": 72,
"metadata": {},
"outputs": [],
"source": [
"# Calculate along-track distance\n",
"distance = np.arange(0.0, len(transectIndex) * 60, 60) # GEDI Shots are spaced 60 m apart"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### In order to plot each vertical value for each waveform, you will need to reformat the data structure to match what is needed by `holoviews` Path() capabilities. "
]
},
{
"cell_type": "code",
"execution_count": 73,
"metadata": {},
"outputs": [],
"source": [
"# Create a list of tuples containing Shot number, and each waveform value at height (z)\n",
"wfList = []\n",
"for s, i in enumerate(transectIndex):\n",
" # Basic quality filtering from section 5\n",
" if gediL1B['BEAM0110/geolocation/degrade'][i] == 0 and gediL1B['BEAM0110/stale_return_flag'][i] == 0:\n",
" zStart = gediL1B['BEAM0110/geolocation/elevation_bin0'][i]\n",
" zEnd = gediL1B['BEAM0110/geolocation/elevation_lastbin'][i]\n",
" zCount = sdsCount[i]\n",
" zStretch = np.add(zEnd, np.multiply(range(zCount, 0, -1), ((zStart - zEnd) / int(zCount))))\n",
" waveform = gediL1B[sdsWaveform][sdsStart[i]: sdsStart[i] + zCount]\n",
" waves = []\n",
" for z, w in enumerate(waveform):\n",
" waves.append((distance[s], zStretch[z],w)) # Append Distance (x), waveform elevation (y) and waveform amplitude (z)\n",
" wfList.append(waves)\n",
" else:\n",
" print(f\"Shot {s} did not pass quality filtering and will be excluded.\")"
]
},
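  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### As a standalone sketch (made-up numbers, not GEDI values) of the `zStretch` calculation above: it linearly spreads the waveform's `zCount` bin elevations from the elevation of the first bin (`elevation_bin0`) down toward the last bin (`elevation_lastbin`)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "# Illustrative values only: top-of-waveform and bottom-of-waveform elevations (m)\n",
    "zStart, zEnd, zCount = 120.0, 20.0, 10\n",
    "# Same expression as the transect loop above\n",
    "zStretch = np.add(zEnd, np.multiply(range(zCount, 0, -1), ((zStart - zEnd) / int(zCount))))\n",
    "print(zStretch[0], zStretch[-1])  # 120.0 30.0: elevations step down 10 m per bin"
   ]
  },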
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Good news again, it looks like all of the waveforms in our transect passed the quality filtering test.\n",
"\n",
"### Below, plot each waveform by using `holoviews` Path() function. This will plot each individual waveform value by distance, with the amplitude plotted in the third dimension in shades of green."
]
},
{
"cell_type": "code",
"execution_count": 74,
"metadata": {},
"outputs": [
{
"data": {
"application/vnd.holoviews_exec.v0+json": "",
"text/html": [
"