Tuesday 30 August 2016

Resolve Short URLs To Their Destination URL with PHP (such as T.co, bit.ly & tinyurl.com)

In a few projects I’ve worked on recently, I’ve had to resolve short URLs to their destination URL. This post will show you how to:

- Resolve a single URL one level deep.
- Resolve a URL in a PHP loop until we reach a page that we deem to be the destination.
- Resolve multiple links obtained from a remote source.
- Use our own API (with added malware checks).

Update: An updated and far more user-friendly version of this post is scheduled here. Like us on Facebook to be notified when it’s published.

Keep in mind that we’re only able to determine the destination page if the redirect is provided in the HTTP header. There are other ways of providing a redirect, and this code doesn’t provide for those situations (for example, a JavaScript redirect). This article should really be read in company with the post titled “Determine the Status of a Remote Webpage and Retrieve the HTTP Status Code”; using the header code is a more effective means of determining a redirect.

My requirement was very specific to one of my own little projects, so my rant on this page isn’t for everybody. I had intended to provide usage details on a fairly comprehensive API, but the post got too long. As such, I’ll provide details soon.

Resolving a Single Short URL

If you wanted to resolve a single short URL, you could just use my resolveShortURL() function (shown in full further down the page) as follows:

echo resolveShortURL('http://tinyurl.com/internoetics');

The above usage will return http://www.internoetics.com/. Information on the function is included in the more detailed text below.

Batch Resolving Short URLs

What I’m about to show you is just one way of resolving multiple URLs returned from another source (web page, RSS feed, text file etc.). For the purpose of the example, we’ll return an array of URL matches extracted from a Twitter Atom feed with a search term of “aviation”.
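A quick aside before the batch example: if you only want the final destination and don’t care about the intermediate hops, cURL can follow the redirect chain itself and report where it ended up. This variation is a sketch of my own (the name resolveDestinationURL() is not part of the code used elsewhere in this post):

```php
<?php
/* Follow redirects with cURL and return the final URL.
   A sketch only: no error handling beyond the curl_exec() check. */
function resolveDestinationURL($url, $maxRedirects = 10) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_NOBODY, true);          // headers only; we don't need the body
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);  // let cURL chase each Location: header
    curl_setopt($ch, CURLOPT_MAXREDIRS, $maxRedirects);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    $ok = curl_exec($ch);
    $final = ($ok === false) ? false : curl_getinfo($ch, CURLINFO_EFFECTIVE_URL);
    curl_close($ch);
    return $final;
}

// echo resolveDestinationURL('http://tinyurl.com/internoetics');
```

Some servers refuse HEAD-style requests; drop the CURLOPT_NOBODY line if that bites you.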
For my example I’m using a Twitter feed simply because they’re full of truncated URLs. Keep in mind that when you’re searching Twitter specifically, you can resolve URLs simply by including Tweet Entities in your JSON request; however, that defeats the purpose of what I’m showing you. Read more about Tweet Entities. From Twitter:

Why Tweet Entities? Tweet text can potentially mention other users or lists, but also contain URLs, media, hashtags… Instead of parsing the text yourself to try to extract those entities, you can use the entities attribute that contains this parsed and structured data.

1. Retrieving the data

The first thing we’ll do is retrieve data from any source: RSS feed, webpage, text file etc.

/* Retrieve the remote data - any source will do; adjust the feed URL to suit */
$data = file_get_contents('http://search.twitter.com/search.atom?q=aviation');

2. Find the links

Next, we extract every link from the retrieved data:

/* Find All Links in Tweets */
preg_match_all("/(http|https|ftp):\/\/[^<>[:space:]]+[[:alnum:]#?\/&=+%_]/", $data, $match);
$list = $match[0];
/* Wrap $list in pre tags to inspect it */
// print_r($list);

3. Resolve the URL

The next step requires us to follow each short URL and determine where it takes us. First, the function:

/* Resolve Short URL */
function resolveShortURL($url) {
    $ch = curl_init("$url");
    curl_setopt($ch, CURLOPT_HEADER, 1);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 0);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    $yy = curl_exec($ch);
    curl_close($ch);
    $w = explode("\n", $yy);
    /* Want to print the header array? Uncomment below */
    // print_r($w);
    $TheShortURL = in_array_wildcard('Location', $w);
    $url = $TheShortURL[0];
    $url = str_replace("Location:", "", "$url");
    $url = trim("$url");
    return $url;
}

$w is the header array with the redirect info. $TheShortURL returns an array with only the Location:* value. $url is the unshortened destination URL.

The problem with the returned header array ($w, in the example above) is that the Location value isn’t returned in a constant position within the array (in the example below the key is 2, but this can vary from server to server). For this reason, we have to build our own little function to return only the value that includes the (partial) Location value.

[Image: the Location URL in the returned $w array]

To extract the correct value from the header array, we’ll build our own array wildcard search function.

Wildcard Search in Array

Using PHP’s preg_grep() function, we can match a pattern (i.e. one that includes Location) in the array and return only that match in a new array with a single value.

/* Wildcard Array Search */
function in_array_wildcard($needle, $arr) {
    return array_values(preg_grep('/' . str_replace('*', '.*', $needle) . '/', $arr));
}

We’ll obtain the Location: $url match by using our array wildcard function as follows:

$MatchesArray = in_array_wildcard('Location', $w);

The wildcard array search is a worthy snippet in itself. It could be modified and used as an alternative to the problem I faced with my ‘administrative_area_level_1’ search function, as described in my ‘Google State/City’ post.

If we were to print the newly returned $MatchesArray, it would return the following:

Array ( [0] => Location: http://bit.ly/OVxxxx )

In the same function that made the cURL request, we simply use $url = trim(str_replace("Location:", "", "$url")); to remove Location: from the returned data, leaving us with just the URL. You could just as easily use an expression to find and return a URL match.

For my purposes, I only wanted to iterate over the returned URL matches ($list, from our Twitter search) two levels deep (since a short URL can often send you to yet another short URL). If you wanted to follow truncated URLs indefinitely, you’d loop over the matches until you identified a URL that wasn’t deemed to be a short URL. If you chose this route, you would want to add protection against an infinite loop (by ceasing requests after ‘n’ attempts or on finding a repeated URL).

How do we know if the URL we’re sent to is a short URL? There are a number of ways of doing this. First, we could read the header data from the remote page (using code provided here) and determine if it wants to send us somewhere else; basically, any 3XX header code suggests some kind of redirect. What I’m personally doing is reading the header data for a redirect Location URL and comparing it against an array of top URL shorteners. Keep in mind that my application required this… you probably wouldn’t want to limit yourself to known services unless you had that specific need.
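To see the in_array_wildcard() function working in isolation, here it is run against a mocked-up header array (the header values below are examples only, not a live response):

```php
<?php
/* Wildcard array search, as above */
function in_array_wildcard($needle, $arr) {
    return array_values(preg_grep('/' . str_replace('*', '.*', $needle) . '/', $arr));
}

/* A sample header array of the kind explode() produces from a cURL response */
$headers = array(
    'HTTP/1.1 301 Moved Permanently',
    'Server: nginx',
    'Location: http://www.internoetics.com/',
    'Content-Type: text/html'
);

$match = in_array_wildcard('Location', $headers);
echo trim(str_replace('Location:', '', $match[0])); // http://www.internoetics.com/
```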
I’ve created an array for my own use that contains about 300 URL shorteners (available in the download). Here’s an example with just the better-known truncating services:

/* Array of top URL shorteners */
$urlArray = array("tiny.cc", "is.gd", "own.ly", "rubyurl.com", "bit.ly", "tinyurl.com", "moourl.com", "cli.gs", "ka.lm", "u.nu", "yep.it", "shrten.com", "miniurl.com", "snipurl.com", "short.ie", "idek.net", "w3t.org", "shiturl.com", "dwarfurl.com", "doiop.com", "smallurl.in", "notlong.com", "fyad.org", "safe.mn", "hex.io", "lnkd.in", "fb.me", "amzn.to", "goo.gl", "j.mp", "mcaf.ee", "lnk.ms", "youtu.be", "wp.me", "fwd4.me", "su.pr", "t.co", "snurl.com", "tr.im", "twurl.cc", "fat.ly");

To determine if the remote page will send us to another URL shortener, I’m using another function to get only the domain name from the full URL address, which I then compare to the values in $urlArray:

/* Find domain from URL */
function stripit($url) {
    $url = trim($url);
    $url = preg_replace("/^(https?:\/\/)?(www\.)?/is", "", $url);
    $url = preg_replace("/\/.*$/is", "", $url);
    return $url;
}

For example, http://bit.ly/OVxxxx becomes bit.ly. We then compare bit.ly with the $urlArray to determine if there’s a match.
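If you’d rather avoid the regular expressions, PHP’s parse_url() can do the same job. This alternative is my own substitution (it assumes the URL includes a scheme, which resolved Location: headers normally do):

```php
<?php
/* Extract the bare domain from a full URL with parse_url() */
function domainFromURL($url) {
    $host = parse_url(trim($url), PHP_URL_HOST);
    if (empty($host)) {
        return false; // no scheme/host found; the regex version above is more forgiving
    }
    return preg_replace('/^www\./i', '', $host);
}

echo domainFromURL('http://bit.ly/OVxxxx'); // bit.ly
```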
In this particular case, and since there’s a match in $urlArray, we’ll resolve the initial URL. Important: in the case of Twitter, all their URLs are masked by their own t.co shortener. If you were resolving multiple URLs that included non-short URLs, you should compare every returned URL match against our short URL array so you don’t try to resolve a normal web address (otherwise it’ll return nothing).

4. Iterate over the URL Matches

We now loop over all the returned URLs (from the $list array) and apply the resolveShortURL() function to determine where each takes us. For each URL, we’ll apply the stripit() function to return a raw domain, and then we’ll compare that value against our $urlArray. If the URL exists in our known list of shorteners, we’ll resolve that URL as well. In my little example below, I’ll print each URL as a link on a new line (with the raw URL as the link text).

foreach ($list AS $url_id) {
    if (!isset($Turl[$url_id])) {
        $Turl[$url_id] = true;
        $url_id_1 = resolveShortURL($url_id);
        $url_id_1_s = stripit($url_id_1);
        if (in_array("$url_id_1_s", $urlArray)) {
            $url_id_2 = resolveShortURL($url_id_1);
            $url_id_2_s = stripit($url_id_2);
            echo "$url_id -> $url_id_1_s -> $url_id_2_s <br />";
        } else {
            echo "$url_id -> $url_id_1_s <br />";
        }
    }
}

Of course, if you were searching a source other than a Twitter feed or a known list of truncated URLs, you should first confirm that the URL you’re following is indeed a short URL; checking against our $urlArray or reading the destination page headers are two good ways of accomplishing this. It’s a fairly intensive process to resolve multiple addresses, so you may want to consider putting a limit on the number of URLs you resolve.

Resolving URLs in a Loop

In my example above, I’ve only resolved the URL two levels deep. What if you had multiple (inefficient) URLs bouncing around the web before you landed on the actual destination page? For example, if you follow this snipr short URL, it will redirect to shnk.me, ow.ly, bit.ly and, finally, tinyurl before it lands on this site. I use the following code in a function to resolve a short URL for as long as I need to (and as long as the short URL service used is in our $urlArray). Comment out the echo line if you don’t want to print the values on your screen. Again, I only compare against my own array of known shorteners because of the application I was involved with; in reality, you’d follow a link almost indefinitely based entirely on the header that’s provided.

/* Resolve a short URL while it keeps pointing at a known shortener */
$url = 'http://snipr.com/24qhrj5';
$domArray = array($url);
$i = true;
while ($i) {
    if (in_array(stripit($url), $urlArray)) {
        $resolvedURL = resolveShortURL($url);
        $domArray[] = $resolvedURL;
        echo "$resolvedURL <br />";
        $url = $resolvedURL;
    } else {
        $resolvedURL = $url;
        $i = false;
    }
}

This will create an array of all the redirects.
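The hop limit and repeated-URL check mentioned earlier can be sketched like this. The $resolve lookup table below is a stand-in for live resolveShortURL() calls (so the logic can be shown without network requests), and followChain() is my naming, not code from the project:

```php
<?php
/* Follow a chain of redirects with a hop limit and repeated-URL detection */
function followChain($url, callable $resolve, $maxHops = 10) {
    $chain = array($url);
    for ($hop = 0; $hop < $maxHops; $hop++) {
        $next = $resolve($url);
        if (!$next || $next === $url || in_array($next, $chain)) {
            break; // no redirect, or a loop back to a URL we've already seen: stop
        }
        $chain[] = $next;
        $url = $next;
    }
    return $chain;
}

/* Stand-in resolver: a fixed lookup table instead of live cURL requests */
$table = array(
    'http://snipr.com/24qhrj5'        => 'http://bit.ly/SlrMDl',
    'http://bit.ly/SlrMDl'            => 'http://tinyurl.com/internoetics',
    'http://tinyurl.com/internoetics' => 'http://www.internoetics.com',
);
$resolve = function ($url) use ($table) {
    return isset($table[$url]) ? $table[$url] : false;
};

print_r(followChain('http://snipr.com/24qhrj5', $resolve));
```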
To print the array:

/* Wrap in pre tags */
print_r($domArray);

Returns:

Array
(
    [0] => http://snipr.com/24qhrj5
    [1] => http://shnk.me/73553
    [2] => http://ow.ly/d8WUp
    [3] => http://bit.ly/SlrMDl
    [4] => http://tinyurl.com/internoetics
    [5] => http://www.internoetics.com
)

To get the destination URL, use:

echo end($domArray);

Returns: http://www.internoetics.com.

Other Applications

I mentioned that I’m using the feature to resolve short URLs on a few different projects – here’s just one. On a Usenet site I’ve mentioned a few times, it’s not uncommon for unscrupulous marketers, or those with less than honourable intent, to post malicious links. In their raw form, short URLs don’t offer any protection to those who click on links from unknown sources. What I’ve done (in a new, unreleased version of the site) is resolve all ‘known’ short URLs and print them partially to provide some idea of the destination.

Short URLs were actually invented for Usenet; more specifically, for a random unicycle newsgroup. They’re an intrinsic part of the Usenet experience because they eliminate long and messy URLs wrapping or breaking over multiple lines. In my case, I didn’t want to hijack the short URL, but I did want to print the long URL in one form or another as a measure of protection. Using the function provided here, I’ll break the destination URL apart into two halves once I’ve determined where it takes us. It means I can construct a link that retains the short URL as the link but replaces the actual link text with a semi-truncated URL.
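The “two halves” display can be sketched as follows; semiTruncateURL() is a hypothetical helper of my own, not the site’s actual function:

```php
<?php
/* Shorten a long URL for display: keep the start and the end, elide the middle.
   semiTruncateURL() is a hypothetical helper, not the original site's code. */
function semiTruncateURL($url, $head = 34, $tail = 5) {
    if (strlen($url) <= $head + $tail + 1) {
        return $url; // already short enough to show in full
    }
    return substr($url, 0, $head) . '…' . substr($url, -$tail);
}

echo semiTruncateURL('http://www.example.com/a-very-long-example-path-for-display');
```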
Since I parse all links into active HTML links on my other site, I’m not yet sure whether I’ll provide the destination URL as a title tag (so it displays when you mouse over the link) or use the resolved URL (in a very truncated form) as the primary text while retaining the link to the short URL. For example, consider this block of text:

This is example text only. I’ll include a couple of links like this one http://fat.ly/flight4 and this one http://fat.ly/flight5 – both podcasts from Flight Podcast.

Using the first technique I described, the above text would render on the screen like this (note that the active link remains a short URL):

This is example text only. I’ll include a couple of links like this one http://www.flightpodcast.com/episo…ehran and this one http://www.flightpodcast.com/episo…-king – both podcasts from Flight Podcast.

Another option is to simply render the destination page partially in brackets alongside the URL. Either way, it offers some measure of link protection.

Removing Linkrot

I also want to remove dead or incorrect links on another site should those links no longer exist (or if they are deemed to include malicious code). Using techniques described on this page, I can determine at various intervals whether the page still exists. If it doesn’t, I can replace it with plain text indicating why the link isn’t active.

Long URL API

Since I’m resolving short URLs on so many sites (for both myself and clients), I figured I should build a quick API for easy implementation. I wanted to include, at the very least, the following: resolve a short URL to its final destination, regardless of how many links we have to jump. Return the long URL. Check the long URL against the main malware databases – including Google’s constantly updated lists of suspected phishing and malware pages. Return a truncated long URL link but with the short URL as anchor text (a few people have asked for this).
Return the title and, perhaps, other details from the destination page. Provide XML, JSON and text options. I’d ideally like to include a link to an image snapshot of the destination page… but that can wait (as can all the other funky stuff I’d like to include). I’ll post details another time.

Short URL for this post:

Front-End Author Listing And User Search For WordPress

This article will guide you through the process of creating a front-end page in WordPress that lists your authors. We’ll discuss why you would want to do this, we’ll introduce the WP_User_Query class, and then we’ll put it all together.

User Engagement And WordPress

At its core, WordPress is a rock-solid publishing platform. With a beautiful, easy-to-use interface and support for custom post types and post formats, publishers have the flexibility to do what they do best: write content. However, WordPress is lacking in social interaction between content authors and readers. BuddyPress is trying to solve this, but I believe it’s going in the wrong direction by trying to be a full-fledged social network.

A big phrase in the publishing world is “user engagement.” This is about getting readers to spend more time on the website, actively searching for content and even generating their own. While one could write a few books on the subject, here are a few things a WordPress publisher can do:

- Create a daily or weekly newsletter, with top stories from selected categories;
- Provide an editorial-driven open forum in which editors propose themes, stories and questions and readers comment on them;
- Continue the discussion of articles on social platforms;
- Encourage users to submit articles and images for contests;
- Highlight your authors.

Listing Authors, And Why It’s A Good Thing

If you’re a publisher, your authors are your biggest asset. They are the content creators. Their writing gets consumed by millions of people all over the world.
Showcasing them exposes them for what they really are: authorities. Your authors will thank you for acknowledging them, and readers will get to see the human face behind the technology.

Coding The Perfect Author Listing

Here are the things we want to achieve with our page:

- Build it as a WordPress plugin so that we can reuse it more easily;
- Display the name, biography, number of posts and latest published post of all authors;
- Paginate the listing if we have many authors;
- Make the listing searchable.

Introducing WP_User_Query And get_users

The WP_User_Query [2] class allows us to query the user database. Besides returning an array of users, WP_User_Query returns general information about the query and, most importantly, the total number of users (for pagination). One can use WP_User_Query by passing a series of arguments and listing the output.

$my_authors = new WP_User_Query(
    array(
        'blog_id'      => $GLOBALS['blog_id'],
        'role'         => '',
        'meta_key'     => '',
        'meta_value'   => '',
        'meta_compare' => '',
        'include'      => array(),
        'exclude'      => array(),
        'search'       => '',
        'orderby'      => 'login',
        'order'        => 'ASC',
        'offset'       => '',
        'number'       => '',
        'count_total'  => true,
        'fields'       => 'all',
        'who'          => ''
    )
);

We’ll focus on only a few arguments, rather than go through all of them:

role — the user’s role. In our example, we’ll query for author.
offset — the first n users to be skipped in the returned array.
number — limit the total number of users returned.

We also have the get_users [3] function, which (like WP_User_Query) returns a number of users based on the parameters set. The important difference between the two is that get_users only returns an array of users and their meta data, whereas WP_User_Query returns extra information, such as the total number of users (which is useful when it comes time to paginate).

Simple User Listing Using get_users()

Before moving on with the full user listing, including pagination and search, let’s see get_users in action.
If all you need is a simple list of authors, then you could just use wp_list_authors [4], like so:

wp_list_authors('show_fullname=1&optioncount=1&orderby=post_count&order=DESC&number=3');

Creating A Plugin And Shortcode With A Bit More Functionality

A simple and straightforward way to build our user listing would be to create a shortcode that we could include on any page we like. Housing this type of functionality in a plugin is ideal, so that we don’t have to worry about migrating it when we change the theme. Let’s keep it simple: our entire plugin will consist of just one file, simple-user-listing.php. (The listing markup in the second half of the function is a minimal sketch; shape the HTML to suit your theme.)

function sul_user_listing($atts, $content = null) {
    extract(shortcode_atts(array(
        "role"   => '',
        "number" => '10'
    ), $atts));

    $role = sanitize_text_field($role);
    $number = sanitize_text_field($number);

    // We're outputting a lot of HTML, and the easiest way
    // to do it is with output buffering from PHP.
    ob_start();

    // Get the search term
    $search = ( isset($_GET["as"]) ) ? sanitize_text_field($_GET["as"]) : false;

    // Get query var for pagination. This already exists in WordPress
    $page = (get_query_var('paged')) ? get_query_var('paged') : 1;

    // Calculate the offset (i.e. how many users we should skip)
    $offset = ($page - 1) * $number;

    if ($search) {
        // Generate the query based on the search field
        $my_users = new WP_User_Query(array('role' => $role, 'search' => '*' . $search . '*'));
    } else {
        // Generate the query
        $my_users = new WP_User_Query(array('role' => 'author', 'offset' => $offset, 'number' => $number));
    }

    // Get the total number of authors. Based on this, the offset and the
    // number per page, we'll generate our pagination.
    $total_authors = $my_users->total_users;

    // Calculate the total number of pages for the pagination.
    // ceil() avoids an extra empty page when the count divides evenly.
    $total_pages = ceil($total_authors / $number);

    // The authors object.
    $authors = $my_users->get_results();
    ?>

    <?php if (!empty($authors)) : ?>
        <ul class="author-list">
        <?php foreach ($authors as $author) :
            $author_info = get_userdata($author->ID); ?>
            <li>
                <?php echo get_avatar($author_info->ID, 48); ?>
                <a href="<?php echo get_author_posts_url($author_info->ID); ?>"><?php echo $author_info->display_name; ?></a>
                <p><?php echo $author_info->description; ?></p>
            </li>
        <?php endforeach; ?>
        </ul>
    <?php else : ?>
        <p>No authors found</p>
    <?php endif;

    return ob_get_clean();
}
add_shortcode('userlisting', 'sul_user_listing');
Breaking Down The Code

The top of our plugin’s main PHP file must contain the standard header information. This header tells WordPress that our plugin exists, and it adds the plugin to the management screen so that it can be activated, loaded and run.

/*
Plugin Name: Simple User Listing
Plugin URI: http://cozmoslabs.com
Description: Create a simple shortcode to list our WordPress users.
Author: Cristian Antohe
Version: 0.1
Author URI: http://cozmoslabs.com
*/

Creating a Shortcode

Adding a new shortcode in WordPress is rather easy. We write the function that returns the desired output (in our case, sul_user_listing), and then we register it using the add_shortcode WordPress function.

function sul_user_listing($atts, $content = null) {
    // return our output
}
add_shortcode('userlisting', 'sul_user_listing');

We want to be able to list users based on their roles and to control how many users are displayed on the page. We do this through shortcode arguments. We’ll add the shortcode to our theme in this way: [userlisting role="author" number="15"]. This will allow us to reuse the plugin to list our subscribers as well. To do this, we need to use shortcode arguments:

extract(shortcode_atts(array(
    "role"   => '',
    "number" => '10'
), $atts));

The extract function imports variables into our function from an array. The WordPress function shortcode_atts basically returns an array with our arguments, and we’ll set up some defaults in case none are found. Note that the role default is an empty string, which would list all users regardless of their role.

Shortcodes Should Never Echo Output

The return value of a shortcode handler function gets inserted into the post content’s output in place of the shortcode. You should use return, not echo; anything that is echoed will be sent to the browser but will probably appear above everything else, and you would also probably get “headers already sent” errors.
For simplicity, we’re buffering the output through ob_start(), so we can put everything into a buffer and return it once we’re done.

Setting Up Our Variables

Now we can start building our listing of authors. First, we need to set up a few variables:

$search — the as GET parameter, if it exists.
$page — the get_query_var for the pagination. This already exists in WordPress.
$offset — the number of users to skip when paginating.
$total_authors — the total number of authors.
$total_pages — the total number of pages for the pagination.

The Query

We actually have two queries: the default listing and the search results.

if ($search) {
    // Generate the query based on the search field
    $my_users = new WP_User_Query(array('role' => $role, 'search' => '*' . $search . '*'));
} else {
    // Generate the query
    $my_users = new WP_User_Query(array('role' => 'author', 'offset' => $offset, 'number' => $number));
}

WP_User_Query->total_users and WP_User_Query->get_results

WP_User_Query provides us with two particularly useful members:

total_users — returns the total number of authors. This, the offset and the number of users per page will generate our pagination.
get_results — returns an object with the authors alone. This is similar to what get_users() returns.

The Search Form

For the search, we’re using a simple form; there’s nothing complex here. A minimal version (the markup is a sketch; note that the as field name matches the $_GET["as"] check above):

<form method="get" action="">
    <label for="as">Search authors:</label>
    <input type="text" name="as" id="as" value="<?php echo esc_attr($search); ?>" />
    <input type="submit" value="Search" />
</form>
User Data and Listing the Authors

Looping through our results is fairly simple. However, getting information about users is a bit confusing in WordPress, because there are a lot of ways to get user data. We could get it directly from the returned query; we could use general functions such as get_userdata, get_user_meta, the_author_meta and get_the_author_meta; or we could even use dedicated functions such as the_author_link and the_author_posts. We’ll just use get_userdata plus two other functions: get_author_posts_url and get_avatar.
We need pagination because each listing generates two extra queries, so listing 100 people would mean 200 extra queries per page. That’s a bit much; for websites with many authors, the load could get heavy enough to bring down the website.
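On the arithmetic side, the number of pages needed for a given author count is best computed with ceil(), which avoids the extra empty page that intval($total / $number) + 1 produces when the count divides evenly (the helper name here is mine):

```php
<?php
/* Pages needed to show $total items at $perPage items per page */
function sul_total_pages($total, $perPage) {
    return (int) ceil($total / $perPage);
}

echo sul_total_pages(100, 10); // 10 (intval(100 / 10) + 1 would give 11)
echo "\n";
echo sul_total_pages(101, 10); // 11
```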
Final Thoughts

We’ve discussed the code for an authors listing, but it has many more uses:

- List your company’s employees;
- Showcase users who have won a competition (by listing users with the role of “winners”);
- Present your company’s departments, each with its respective team (based on user roles).

If you allow users to register on your website, you could use more or less the same code to generate any listing of users based on your needs. If you require users to log in in order to comment (an effective way to stop automated spam), then listing users and their number of comments could increase engagement.

Have you used something similar for a project? If so, let us know in the comments!

[1] https://www.smashingmagazine.com/wp-content/uploads/2012/05/user-listing1.jpg
[2] http://codex.wordpress.org/Class_Reference/WP_User_Query
[3] http://codex.wordpress.org/Function_Reference/get_users
[4] http://codex.wordpress.org/Function_Reference/wp_list_authors

NHC Experimental Gridded Marine Forecasts

The NHC/TAFB gridded marine forecasts are now available on an experimental basis in the National Digital Forecast Database (NDFD). Gridded forecasts of marine weather elements are available over the TAFB high seas forecast area of responsibility (AOR), which also includes the offshore waters forecast AOR. The gridded marine parameters include:

- surface (10-m) wind speeds with direction
- surface (10-m) wind gusts
- significant wave heights
- marine hazards

These elements are available at a spatial resolution of 10 km for TAFB. The data have an initial temporal resolution of six (6) hours out to 144 hours, or six (6) days. Plans are to eventually move toward a temporal resolution of 3 hours for all the marine centers contributing to the NDFD. The grids are produced by forecasters through the AWIPS Graphical Forecast Editor (GFE) and should be available by 0330, 0930, 1530 and 2130 UTC each day.
With this implementation, forecasts for these elements are available from NDFD in the following standard formats:

- Gridded Binary Version 2 (GRIB2) files via Hypertext Transfer Protocol (HTTP) and File Transfer Protocol (FTP)
- Extensible Markup Language (XML) via Simple Object Access Protocol (SOAP)
- Graphics via web browser

More details regarding this experimental service are available in the Product Description Document from the online catalog of experimental NWS products and services. Please see the public information statement for additional information.

View Graphical Forecasts

Click on a geographic location of interest to launch the interactive map. The marine forecast images are also available via the experimental NDFD graphical interface. Select the "Oceanic" option to view wind speed, wind gusts, significant wave heights and hazards for the National Hurricane Center, Ocean Prediction Center, CONUS, Alaska, Hawaii and Guam areas of responsibility.

Access Gridded Data

The NDFD oceanic domain covers the Atlantic, Pacific and Arctic basins for the offices issuing offshore waters and high seas forecasts. The upper-right lat/lon for this grid is 79.99N, 10.71E. The lower-left corner lies directly on an NCEP grid 204 point, which coincides with all other Pacific-region NDFD grids; the lower-left lat/lon for this grid is 30.42S, 129.91E. Specific information on the NDFD grid domains, including the oceanic domain, can be found at http://graphical.weather.gov/docs/ndfdSRS.htm. Technical information on accessing and using NDFD elements can be found at http://ndfd.weather.gov/technical.htm.

The GRIB2 marine data on the NDFD oceanic domain can be downloaded at the following locations:

Note: Areas of the NDFD oceanic domain that coincide with the NDFD CONUS domain are included in the CONUS grids as well. Use the NDFD technical page to find access to the CONUS GRIB2 files.
Comments & Feedback

The marine elements will remain experimental until the NWS assesses feedback and completes a technical analysis. At that time, the NWS will determine whether to move these experimental elements to operational status, discontinue them, or revise and extend the experimental feedback period.

Comments and feedback on the experimental TAFB Offshore and High Seas NDFD elements, as well as the OPC Offshore elements, are welcome at: http://www.nws.noaa.gov/survey/nws-survey.php?code=EGOSWHSMF

General feedback on the NDFD GRIB2 service: http://www.weather.gov/survey/nws-survey.php?code=ndfd-grids

General feedback on the NDFD XML SOAP service: http://www.weather.gov/survey/nws-survey.php?code=xmlsoap

General feedback on the experimental NDFD map viewer: http://www.weather.gov/survey/nws-survey.php?code=wxmap
