Sunday, May 31, 2020

How Cloud Computing Is Transforming Electronic Design

Cloud computing is changing everything about electronic design, according to Jeff Bier, founder of the Edge AI and Vision Alliance. That’s because more and more problems confronting designers are getting solved in the cloud.

As part of our regularly scheduled calls with EDN’s Editorial Advisory Board, we asked Bier what topics today’s electronics design engineers need more information on. Bier highlighted the cloud as the number one force driving change in engineering departments around the world. However, you could be forgiven for asking whether cloud computing has anything to do with electronic design at all.

“[The cloud] has everything to do with almost every aspect of electronic design,” Bier said, adamant that it is drastically changing the way engineers work.

Code generation
Bier noted that the traditional engineer’s sandbox, Matlab, introduced a feature more than a decade ago that generated code for the embedded target processor in one step. One underappreciated implication of that feature was that any choice of processor might then be influenced by which processors were supported by Matlab for code generation.

Previously, an embedded DSP engineer took the Matlab code from the algorithm engineer and re-coded the entire thing in assembly language (more likely to be C or C++ today). With Matlab’s code generation, this step could be cut out of the process, and time and money could be saved, but only by switching to a processor that was supported by Matlab.

Today, in the realm of AI and deep neural networks, the majority of algorithms are born in the cloud, using open source frameworks like TensorFlow and PyTorch. They are implemented on embedded processors in a variety of ways with a variety of tools.

“You can bet that going forward, a big factor in which are the preferred processors is going to be which processors have the easiest path from [the cloud to the embedded implementation],” Bier said. “Whose cloud has the embedded implementation button and which processors are supported? That’s the cloud that’s going to win. And that’s the embedded processor [that wins]… if it works, people are going to do that, because it’s a heck of a lot easier and faster than writing the code yourself.”
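To make that path concrete, here is a minimal sketch of the kind of one-step cloud-to-embedded export Bier is describing, assuming TensorFlow 2.x; the model directory and output filename are hypothetical placeholders, and no particular vendor's cloud service is implied.

# Minimal sketch: export a cloud-trained TensorFlow model for an embedded target.
# Assumes TensorFlow 2.x; "exported/keyword_spotter" is a hypothetical model path.
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("exported/keyword_spotter")
# Post-training quantization shrinks the model for small embedded processors.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("keyword_spotter.tflite", "wb") as f:
    f.write(tflite_model)
# The resulting .tflite file is what a vendor's embedded runtime would then load.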

Cloud electronic design
“The cloud has everything to do with almost every aspect of electronic design,” – Jeff Bier

Bier highlights Xnor, the Seattle deep learning company acquired by Apple, which had this process licked.

“They fetched a nice price from Apple, because Apple understands the value of rapid time to market,” he said.

Bier sees many aspects of embedded software heading to the cloud. Many EDA tools are already cloud-based, for example.

“You’ll see similar things where I build my PCB design in the cloud, then who has the ‘fab me ten prototypes by tomorrow’ button?” Bier said.

While many are yet to appreciate the cloud’s significance to the electronic design process, this change is happening fast, partly thanks to the scale of today’s cloud companies.

Bier cites the FPGA players’ historic attempts to make FPGAs easier to program, a problem he said was finally solved by Microsoft and Amazon, which today offer FPGA acceleration of data-parallel code in the cloud.

“All you need is your credit card number… you press the FPGA accelerate button and it just works,” he said. “Microsoft and Amazon solved this problem because they had the scale and the homogeneous environment — the servers are all the same, it’s not like a million and one embedded systems, each slightly different. And they solved problems that [the FPGA players] never could. This is one of the reasons why the cloud is becoming this center of gravity for design and development activity.”

So, what can chip makers do to influence cloud makers to develop code generation functionality for their processors?

“Amazon, Google and Microsoft don’t care whose chip the customer uses, as long as they use their cloud. So [the chip maker is] the only one that cares about making sure that its chip is the one that’s easiest to target,” he said. “So I think they really need both – they really need to work with the big cloud players, but they also need to do their own thing.”

Bier notes that Intel already has its popular DevCloud, a cloud-based environment where developers can build and optimize code.

“It’s the next logical step, where all the tools and development boards are connected to Intel servers,” he said. “There’s no need to wait for anything to install or wait for any boxes to arrive.”

Edge vs. cloud
Another concept that today’s embedded developers really should be well-versed in, Bier said, is edge compute, which refers to any computation done outside the cloud, at the edge of the network. Since more embedded devices now have connectivity as part of the IoT, each system will have to strike a careful balance between what compute is done in the cloud and what is done at the edge for cost, speed or privacy reasons.

“Why do I care, if I’m an embedded systems person? Well, it matters a lot,” Bier said. “If the future is that embedded devices are just dumb data collectors that stream their data to the cloud, that’s frankly a lot less interesting and a lot less valuable than if the future is sophisticated, intelligent embedded devices running AI algorithms and sending findings up to the cloud, but not raw data.”

Bier’s example is a baby monitor company, working on a smart camera to monitor a baby’s movements, breathing and heart rate. Should the intelligence go into the embedded device, or into the cloud?

“Placing the intelligence in the cloud means that if the home internet connection fails, the product doesn’t work,” Bier said. “But by launching the baby monitor [with intelligence in the cloud], they’re able to get the product to market a year faster, as it is then purely a dumb Wi-Fi camera… they didn’t have to build a purpose-built embedded system.”

By keeping intelligence in the cloud, the baby monitor company is also able to iterate its algorithm quickly and easily. Once a reasonable level of deployment is reached, it can do A/B testing overnight: deploy the new algorithm to half its customers, see which algorithm works better, and then deploy the winner to everyone. Deploying a new algorithm can be done with only a few key presses.
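As a purely illustrative sketch of how such an overnight rollout can be split across a fleet, the snippet below assigns devices deterministically to the old or new algorithm by hashing a device ID; the device IDs, experiment name and 50/50 split are hypothetical and not taken from the company Bier describes.

# Illustrative sketch: deterministic A/B assignment of a new cloud-side algorithm.
# Device IDs, experiment name and the 50/50 split are hypothetical.
import hashlib

def ab_bucket(device_id, experiment="algo_v2", treatment_share=0.5):
    """Assign a device to 'A' (current algorithm) or 'B' (new algorithm)."""
    digest = hashlib.sha256(f"{experiment}:{device_id}".encode()).hexdigest()
    fraction = int(digest[:8], 16) / 0xFFFFFFFF  # same device always lands in the same bucket
    return "B" if fraction < treatment_share else "A"

for device in ["cam-0001", "cam-0002", "cam-0003"]:
    print(device, ab_bucket(device))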

“The point is, there are huge implications for what [intelligence] is in the endpoint device, and what’s in some kind of intermediate node, like a device that’s connected to your router or that’s on the operator’s pole down the street, or in the data center,” Bier said. “But this is outside the scope of what most embedded systems people think about today.”

The baby monitor company, following a successful cloud-based launch, is now shipping monitors in volume. Bier notes that the company has therefore become more cost-sensitive, and doesn’t need to iterate the algorithm as much, so it is looking into building a second-generation product that uses mostly edge processing.

Do you need a DNN?
Another rapidly growing field clearly changing the way embedded systems work is artificial intelligence.

Surveys carried out by the Edge AI and Vision Alliance reveal that deep neural networks (DNNs), the technique underpinning much of modern artificial intelligence, have gone from around 20% to around 80% adoption in embedded computer vision systems in the last five years.

“People are struggling with two things,” Bier said. “One is that actually getting them to work for their application is really hard. The other thing is figuring out where they should actually use deep neural networks.”

DNNs have become fashionable and everyone wants to use them, but they are not necessarily the best solution for many problems, Bier noted. For many, classical techniques are still a better fit.

“How would you recognize a problem that is suitable for solving with deep neural networks versus other classical techniques?” Bier said. “Embedded systems people, hardware and software people, really need to have a better grip on this. Because if you’re going to run deep neural networks, it has a big impact on your hardware — you need a tremendous amount of performance and memory compared to classical, hand engineered algorithms.”
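One rough way to see that hardware impact is to compare parameter counts and multiply-accumulate (MAC) operations; the sketch below uses hypothetical layer shapes and a single 3x3 hand-engineered kernel purely for illustration, not figures from any specific product.

# Back-of-the-envelope sketch: one small DNN convolution layer vs. one classical
# hand-engineered 3x3 filter. All shapes are hypothetical, for illustration only.

def conv_layer_cost(h, w, c_in, c_out, k=3):
    """Return (parameters, multiply-accumulates) for a k x k convolution layer."""
    weights = k * k * c_in * c_out
    params = weights + c_out        # weights plus per-channel biases
    macs = weights * h * w          # the kernel is applied at every output pixel
    return params, macs

dnn_params, dnn_macs = conv_layer_cost(224, 224, 64, 128)       # a modest DNN layer
classic_params, classic_macs = conv_layer_cost(224, 224, 1, 1)  # e.g. a Sobel-style filter

print(f"DNN layer:     {dnn_params:,} parameters, {dnn_macs:,} MACs per frame")
print(f"Classical 3x3: {classic_params:,} parameters, {classic_macs:,} MACs per frame")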

The post How Cloud Computing Is Transforming Electronic Design appeared first on EE Times Asia.



from EE Times Asia https://ift.tt/2Mg7RIm

Digital Twins Bridge the Data Gap for Deep Learning

In today’s world, data is king. The most highly valued companies in the world, whether Amazon, Apple, Facebook, Google, Walmart, or Netflix, have one thing in common: data is their most valuable asset. All of these companies have put that data to work using deep learning (DL). No matter what business you’re in, your data is your most valuable asset. You need to protect that asset by doing your own DL. The most important ingredient for DL success is having enough of the right kinds of data. That’s where digital twins come in.

A digital twin is a digital replica of an actual physical process, system, or device. Most importantly, digital twins can be the key to success for DL projects — especially DL projects that involve processes that are dangerous, expensive, or time-consuming.

The promise of deep learning

By now, nearly every industry — including semiconductor manufacturing — has recognized the potential of DL to create strategic advantage. DL employs neural networks to perform advanced pattern-matching. DL has been applied to such varied fields as facial and speech recognition, medical image analysis, bioinformatics, and materials inspection. In semiconductor manufacturing, DL has already been applied in areas such as defect classification. Most leading companies are scrambling to gain an advantage on this promising new playing field.

As companies start to explore DL and how it can help them, many are finding two things: first, it’s easy to get to a DL prototype, and second, it’s harder to get from “good prototype” results to “production-quality” results. With all of the low- to no-cost DL platforms, tools, and kits available today, initial development for DL applications is very quick and relatively easy in comparison to conventional application development. However, productizing DL applications isn’t any easier — and can be harder — than productizing conventional applications. The reason for this is data. Having enough data — and enough of the right kinds of data — is very often the difference between a DL application that doesn’t deliver production-quality results and one that revolutionizes the way you approach a particular problem.

The DL data gap

DL is based on pattern-matching, which is “programmed” by presenting neural networks with data that represent a target to be matched. Masses of data train a network to recognize the target (and to know when it’s not the target).

DL is incredibly powerful for quickly producing prototypes and providing proof-of-concepts. But the real advantage of DL isn’t the speed of development — it’s the fact that it unlocks the power of data to do things that can’t be done any other way.

The success of any DL application depends on the depth and breadth of the data set used in training. If the training data set is too small, too narrow, or too “normal,” a DL approach will not do better than standard techniques — in fact, it might do worse. It’s important to train a network with data representing all important states or presentations, in sufficient volumes for the network to learn to capture the correct essence of the problem at hand.

The difficulty for some fields, such as autonomous driving or semiconductor manufacturing, is that some of the most serious anomalous conditions occur (thankfully) very rarely. However, if you want a DL application to recognize a child darting in front of a car — or a fatal photomask error — you have to train the networks with a multitude of these scenarios, which don’t exist in any great volume in the real world (Figure 1). Digital twins are the only way to create enough anomalous data to properly train the networks to recognize these conditions.

Figure 1. Illustration of a normal distribution curve with standard deviation. In semiconductor manufacturing, as with driving, “outlier” events are very rare, but neural networks must be trained on them all the same, because worst-case incidents result in chip failure; overall average performance isn’t good enough.
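To make the data-gap point concrete, here is a hedged sketch of topping up a nearly empty anomalous class with digital-twin output before training; simulate_defect_sample() is a hypothetical stand-in for a real digital twin, and the sample counts are invented.

# Illustrative sketch: filling out a rare "defect" class with digital-twin output.
# simulate_defect_sample() is a hypothetical stand-in for a real digital twin.
import random

def simulate_defect_sample(seed):
    """Pretend digital twin: return one synthetic anomalous sample (placeholder data)."""
    rng = random.Random(seed)
    return [rng.random() for _ in range(16)]

def balance_training_set(normal, defects, target_defect_count=1000):
    """Augment the anomalous class until it reaches the desired volume."""
    missing = max(0, target_defect_count - len(defects))
    synthetic = [simulate_defect_sample(seed) for seed in range(missing)]
    return normal, defects + synthetic

normal_samples = [[0.0] * 16 for _ in range(10_000)]  # plentiful "normal" data
real_defects = [[1.0] * 16 for _ in range(12)]        # rare real-world anomalies
normal_samples, defect_samples = balance_training_set(normal_samples, real_defects)
print(len(normal_samples), "normal vs.", len(defect_samples), "defect samples for training")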

Digital twins bridge the gap

Digital twins — virtual representations of actual processes, systems, and devices — are a key tool for creating the right amount of the right kind of data to train DL networks successfully. Last July, I was part of a TechTALK session at SEMICON West 2019 hosted by Dave Kelf of Breker Verification Systems, Inc., titled, “Applied AI in Design-to-Manufacturing.” In this panel session, I outlined the concept of using digital twins in semiconductor manufacturing. You can read an article covering this panel, written by the late and sorely missed Randy Smith for Semiwiki.

There are several reasons to use digital twins to create DL training data:

  • You may be in a position where the data you work with belongs to your customers, so you can’t use it for DL training.
  • You may be in a position where the resources you need to create the data you need for DL are fully committed to customer projects.
  • You have developed DL applications and found that you need specific data to tune and train your neural networks to reach the required level of accuracy, but the cost of using mask shop/fab resources to create that data is prohibitive.
  • You know that you will not be able to find enough anomalous data to train your DL networks adequately. This last case is nearly universal.

Ideally, to maintain full control over the data, you need three digital twins: a digital twin of the process/equipment that precedes yours in the manufacturing flow to provide input data for the simulation of your own process; a digital twin of your own process/equipment; and a digital twin of the process/equipment that follows yours in the manufacturing flow so that you can feed your output downstream for validation.
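The three-twin arrangement is easiest to see as a simple pipeline; in the sketch below, upstream_twin, my_process_twin and downstream_twin are hypothetical placeholder functions standing in for real models, shown only to illustrate how data would flow without leaving your control.

# Conceptual sketch of the three-digital-twin chain described above.
# All three twin functions are hypothetical placeholders, not real models.

def upstream_twin(design):
    """Twin of the preceding process step: produces the input my process would see."""
    return {"layout": design, "incoming_state": "simulated"}

def my_process_twin(incoming):
    """Twin of my own process/equipment: produces my simulated output."""
    return {"output": incoming["layout"], "metrology": "simulated"}

def downstream_twin(my_output):
    """Twin of the following step: checks that my output behaves downstream."""
    return {"validated": True, "checked_output": my_output["output"]}

def run_virtual_flow(design):
    # Chain the twins so training and validation data never leave my control.
    return downstream_twin(my_process_twin(upstream_twin(design)))

print(run_virtual_flow("mask_design_001"))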

At the 2019 SPIE Photomask Technology conference, D2S presented a paper1 demonstrating the creation of two digital twins — a scanning electron microscope (SEM) digital twin, and a curvilinear inverse lithography technology (ILT) digital twin — using DL techniques (Figure 2 shows the output of the SEM digital twin). While the output of digital twins in general is not accurate enough for manufacturing, these digital twins have been used both for training DL neural networks and for validation. Importantly, these digital twins were generated by DL, rather than through simulation. This is an example of using DL as a tool to generate data needed to do other DL, and it demonstrates the compounding benefits of investing in DL.

Figure 2. Two examples of mask SEM images generated by the SEM digital twin and the real SEM image. The image intensity along a horizontal cutline at the same location is shown as well. Not only do the images look very similar, but the signal responses at the edges are similar as well.

A roadmap to DL success

All of this may sound like a lot of work — why not use a consulting company that will do DL for you? Because, remember, data is king! Protect that data and do DL yourself. Thankfully, there is an established path to success for you to follow.

First, you need to identify a project where DL will have an impact. You do need to choose carefully — DL is pattern-matching, so you need to pick something that falls into that realm. Image-based applications, such as defect categorization, are obvious matches. Less obvious, but very powerful, is an application such as automatic discovery from machine logs. All of the equipment in the fab creates masses of operational data, which is rarely referenced until something goes wrong. Instead of using this valuable data merely as a diagnostic tool after the fact, you could monitor this data across the fab on an ongoing basis and train DL applications to flag patterns that precede problems, so you can identify and correct issues before they have impact, saving downtime.
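As a hedged illustration of that idea, the sketch below trains a tiny autoencoder on "normal" log-feature windows and flags windows it reconstructs poorly; the synthetic features, network size and threshold are all assumptions for demonstration, not a production fab monitor, and it assumes TensorFlow 2.x and NumPy.

# Hedged sketch: a tiny autoencoder that flags unusual machine-log feature windows.
# The synthetic features, network size and threshold are assumptions for illustration.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
normal_logs = rng.normal(0.0, 1.0, size=(5000, 12)).astype("float32")  # pretend log features

autoencoder = tf.keras.Sequential([
    tf.keras.layers.Dense(6, activation="relu", input_shape=(12,)),  # bottleneck
    tf.keras.layers.Dense(12, activation="linear"),
])
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(normal_logs, normal_logs, epochs=5, batch_size=64, verbose=0)

def flag_anomalies(batch, threshold=2.0):
    """Flag log windows whose reconstruction error is unusually large."""
    reconstruction = autoencoder.predict(batch, verbose=0)
    errors = np.mean((batch - reconstruction) ** 2, axis=1)
    return errors > threshold

drifting_tool = rng.normal(4.0, 1.0, size=(3, 12)).astype("float32")  # simulated drift
print(flag_anomalies(drifting_tool))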

Mycronic, for example, disclosed during an eBeam Initiative lunchtime talk at the 2020 SPIE Advanced Lithography Conference how the company put DL to work using data from its machine log files to predict anomalies like “mura” (uneven brightness effects that are annoying to the human eye, but that are notoriously difficult for image-processing algorithms to detect) on flat-panel display (FPD) masks.

In general, tedious and error-prone processes that human operators perform, but that are difficult to automate with traditional algorithms, are good candidates for deep learning. Whether through visual inspection or otherwise, a human professional examining one specific situation would typically have a high probability of correctly performing the task. But presented with many instances of similar situations, humans make mistakes and become increasingly unreliable. DL, given one particular situation, may not do as well as a human can, but its probability of success extends unchanged to unlimited instances over unlimited time. Humans make more mistakes as the volume of situations and/or time executing the task increases; DL’s probability of success does not degrade over volume or time.
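A small piece of illustrative arithmetic makes the volume argument clearer; the per-inspection error rates and the linear fatigue drift below are assumptions chosen only to show the shape of the effect, not measured data.

# Purely illustrative arithmetic for the reliability-over-volume argument above.
# The error rates and the fatigue drift are assumptions, not measured data.

def expected_mistakes(error_rates):
    """Expected number of mistakes over a sequence of inspections."""
    return sum(error_rates)

n = 1000
dl_errors = [0.03] * n                                 # assumed constant DL error rate
human_errors = [0.005 + 0.0001 * i for i in range(n)]  # assumed fatigue drift per item

print(f"Expected mistakes over {n} inspections:")
print(f"  DL (constant rate):    {expected_mistakes(dl_errors):.1f}")
print(f"  Human (fatigue drift): {expected_mistakes(human_errors):.1f}")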

Help to bridge the gap to DL success

Once you’ve identified a DL project, there are various resources available that can put you on the path to success while still enabling you to maintain strict control of your own data. If you’re new to DL and would like comprehensive support for your pilot DL project(s), you can join the Center for Deep Learning in Electronics Manufacturing (CDLe, www.cdle.ai), an alliance of industry leaders designed to pool talent and resources to advance the state-of-the-art in DL for our unique problem space and to accelerate the adoption of DL in each of our company’s products to improve our respective offerings for our customers.

If you’ve already started down the road with your DL projects but have encountered issues due to the DL data gap, D2S can help you to build the digital twins you need to augment and tune your data sets for DL success.

– Aki Fujimura is chairman and CEO at D2S

The post Digital Twins Bridge the Data Gap for Deep Learning appeared first on EE Times Asia.



from EE Times Asia https://ift.tt/2XlwZnw

Analog IC Market Goes on Rollercoaster Ride

Industrial applications, smartphones and other consumer electronics devices, along with automotive use cases, are driving an otherwise shrinking analog IC market that saw declining 2019 sales for all but one of the top suppliers.

U.S.-China trade frictions were among the reasons for global sales declines, IC Insights reported. In some regions, especially Europe, the automotive sector is cushioning the analog market from steeper declines.

The analyst firm said market leader Texas Instruments retained its grip on the analog IC sector, accounting for 19 percent of global sales. That share, up 1 percentage point over the previous year, is more than the combined market share of competitors Analog Devices (10 percent) and Infineon (7 percent).

Still, TI’s year-on-year sales declined 5 percent, totaling $10.22 billion. The market tracker pegged the global analog IC market at $55.2 billion in 2019, up from just under $51 billion the previous year.

Source: IC Insights

Of the top ten analog IC vendors in IC Insights’ rankings released this week, only Microchip registered annual sales and market share growth. The Arizona chip maker’s revenues grew 10 percent year-on-year to $1.53 billion while its market share ticked up one percentage point.

The sales and market share increases were attributed to completion of Microchip’s acquisition of Microsemi in 2018, “which helped provide it with a nice boost to full-year analog sales in 2019,” IC Insights noted.

Elsewhere, Infineon remains the top European supplier of analog chips on the strength of its automotive, power management and industrial sales. Automotive accounted for 44 percent of Infineon’s 2019 sales, followed by power management (30 percent). Across all categories, Infineon’s 2019 analog sales slipped 1 percent to $3.75 billion.

Among the biggest decliners last year were Skyworks Solutions, Maxim and ON Semi, all of which suffered a 13-percent decline in annual analog sales.

IC Insights attributed much of No. 5 Skyworks’ decline to ongoing U.S.-China trade frictions, noting that the chip maker’s manufacturing customers have a “large presence in China.” Most make smartphones along with other communications equipment and computing components.

As trade relations worsen, the market watcher noted that Skyworks has ramped up its analog chipset portfolio in anticipation of the launch of 5G smartphones and other infrastructure applications based on the next-generation wireless standard.

Meanwhile, second-ranked Analog Devices said it is focusing on “sensor-to-cloud” applications for edge computing and industrial Internet of Things applications. Industrial applications accounted for half of ADI’s 2019 sales, which totaled $5.16 billion.

The post Analog IC Market Goes on Rollercoaster Ride appeared first on EE Times Asia.



from EE Times Asia https://ift.tt/2XjfK6j

NXP Shareholders Appoint Kurt Sievers as CEO

Further to its announcement in March, NXP Semiconductors’ shareholders today approved the appointment of Kurt Sievers as its chief executive officer. He takes over from Rick Clemmer, who led the company for 11 years and will remain a strategic advisor.

Since September 2018, Sievers has been the president of NXP, with direct oversight and management of all NXP’s business lines. He joined NXP (then Philips Semiconductors) in 1995, and moved through a series of marketing and sales, product definition and development, strategy and general management positions across a number of market segments. He became a member of the executive management team in 2009 and was instrumental in the definition and implementation of NXP’s high-performance mixed-signal strategy. In 2015, he was influential in the merger of NXP and Freescale Semiconductor, which created NXP’s prominent role in automotive semiconductors and secure edge processing.

Commenting on his appointment, Sievers, said, “While we face unprecedented times, I remain confident in our winning strategy to develop and profitably grow market leading and highly differentiated businesses, and continue to foster a culture of innovation and collaboration. I look forward to continuing to work alongside the very best and brightest team and I am committed to ensure the safety and well-being of each and every one of our employees as we weather the pandemic. I could not be prouder of how we have adapted and stayed focused in these times, and I am confident that we will emerge from this stronger, leading NXP into its promising future.”

The company’s chairman, Peter Bonfield, said that Sievers had proven himself exceptionally qualified to lead NXP into its next chapter. “His expertise across business segments, passion for innovation and connections with NXP investors, customers, and employees around the world make Kurt the right leader to continue and build on the company’s successful strategy for years to come.”

The post NXP Shareholders Appoint Kurt Sievers as CEO appeared first on EE Times Asia.



from EE Times Asia https://ift.tt/36QhnLU

Let’s make art at home this week

Digital Making at Home: Make art


Digital Making at Home is a program which encourages young people to code and share along with us, featuring weekly themed content, code-along videos, livestreams, and more!

This week, we’re exploring making art with code. Many young makers are no stranger to making art, especially the digital kind! This week we’re inviting them to bring their most colourful and imaginative ideas to life with code.

So this week for Digital Making at Home, let’s make some art!

The post Let’s make art at home this week appeared first on Raspberry Pi.



from Raspberry Pi Blog – Raspberry Pi https://ift.tt/2XhphdS


What is open source project governance?

In many discussions of open source projects and community governance, people tend to focus on activities or resources like "speaking for the project" or "ownership of the web domain." While documenting these things is useful, they aren't truly governance matters. Alternatively, others focus exclusively on technical matters like election rules, codes of conduct, and release procedures. While these might be the tools of governance, they're not governance itself.

So what exactly is open source project governance?



from Opensource.com https://ift.tt/2AmW9Jd

Saturday, May 30, 2020

SET OF 100 BLUE LEDs

100 pcs Ultra Bright 5mm Round LED Diode, color Blue. Current 5-20 mA. Go to Open-Electronics Store

The post SET OF 100 BLUE LEDs appeared first on Open Electronics. The author is Boris Landoni



from Open Electronics https://ift.tt/3cgEyjv

SET OF 100 RED LEDs

100 pcs Ultra Bright 5mm Round LED Diode, color red. Current 5-20 mA. Go to Open-Electronics Store

The post SET OF 100 RED LEDs appeared first on Open Electronics. The author is Boris Landoni



from Open Electronics https://ift.tt/36Jb64s

How open standards guide us in a world of change

As I write this article in my home office in Beaverton, Oregon, a Portland suburb, I'm relying (and reflecting) on years of work that went into standards like TCP/IP, HTTP, NTP, XMPP, SAML, and many others, as well as open source implementations of these standards from organizations such as the Apache Software Foundation. The combination of these standards and technologies is literally saving lives, as many of us are able to work from home while "flattening the curve."



from Opensource.com https://ift.tt/3ce6Tqr

Make Your Own Smart Wireless Biometric Lock

Traditional key-based locks are now outdated and come with limitations of their own. First, you always have to carry the key with you, and the key might get misplaced or stolen. Such locks are not completely secure either. These old locks are now being replaced by modern biometric ones. However, biometric locks use biometric sensors […]

The post Make Your Own Smart Wireless Biometric Lock appeared first on Electronics For You.



from Electronics For You https://ift.tt/2ZSUl5o

Udoo Bolt Gear mini-PC launches with Ryzen V1000 Udoo Bolt SBC

Seco has launched a $399 “Udoo Bolt Gear” mini-PC kit built around its Ryzen Embedded V1000 based Udoo Bolt SBC. The $399 kit includes a metal case, 65W adapter, and a WiFi/BT M.2 card. A growing number of open-spec, community-backed SBCs ship with optional, and in some cases standard, enclosures, but most of these are […]

from LinuxGizmos.com https://ift.tt/2XcTTgH

Friday, May 29, 2020

Latest Raspberry Pi OS update – May 2020

Along with yesterday’s launch of the new 8GB Raspberry Pi 4, we launched a beta 64-bit ARM version of Debian with the Raspberry Pi Desktop, so you could use all those extra gigabytes. We also updated the 32-bit version of Raspberry Pi OS (the new name for Raspbian), so here’s a quick run-through of what has changed.

NEW Raspberry Pi OS update (May 2020)


Bookshelf

As many of you know, we have our own publishing company, Raspberry Pi Press, who publish a variety of magazines each month, including The MagPi, HackSpace magazine, and Wireframe. They also publish a wide range of other books and magazines, which are released either to purchase as a physical product (from their website) or as free PDF downloads.

To make all this content more visible and easy to access, we’ve added a new Bookshelf application – you’ll find it in the Help section of the main menu.

Bookshelf shows the entire current catalogue of free magazines – The MagPi, HackSpace magazine and Wireframe, all with a complete set of back issues – and also all the free books from Raspberry Pi Press. When you run the application, it automatically updates the catalogue and shows any new titles which have been released since you last ran it with a little “new” flash in the corner of the cover.

To read any title, just double-click on it – if it is already on your Raspberry Pi, it will open in Chromium (which, it turns out, is quite a good PDF viewer); if it isn’t, it will download and then open automatically when the download completes. You can see at a glance which titles are downloaded and which are not by the “cloud” icon on the cover of any file which has not been downloaded.

All the PDF files you download are saved in the “Bookshelf” directory in your home directory, so you can also access the files directly from there.

There’s a lot of excellent content produced by Raspberry Pi Press – we hope this makes it easier to find and read.

Edit – some people have reported that Bookshelf incorrectly gives a “disk full” error when running on a system in which the language is not English; a fix for that is being uploaded to apt at the moment, so updating from apt (“sudo apt update” followed by “sudo apt upgrade”) should get the fixed version.

Magnifier

As mentioned in my last blog post (here), one of the areas we are currently trying to improve is accessibility to the Desktop for people with visual impairments. We’ve already added the Orca screen reader (which has had a few bug fixes since the last release which should make it work more reliably in this image), and the second recommendation we had from AbilityNet was to add a screen magnifier.

This proved to be harder than it should have been! I tried a lot of the existing screen magnifier programs that were available for Debian desktops, but none of them really worked that well; I couldn’t find one that worked the way the magnifiers in the likes of MacOS and Ubuntu did, so I ended up writing one (almost) from scratch.

To install it, launch Recommended Applications in the new image and select Magnifier under Universal Access. Once it has installed, reboot.

You’ll see a magnifying glass icon at the right-hand end of the taskbar – to enable the magnifier, click this icon, or use the keyboard shortcut Ctrl-Alt-M. (To turn the magnifier off, just click the icon again or use the same keyboard shortcut.)

Right-clicking the magnifier icon brings up the magnifier options. You can choose a circular or rectangular window of whatever size you want, and choose by how much you want to zoom the image. The magnifier window can either follow the mouse pointer, or be a static window on the screen. (To move the static window, just drag it with the mouse.)

Also, in some applications, you can have the magnifier automatically follow the text cursor, or the button focus. Unfortunately, this depends on the application supporting the required accessibility toolkit, which not all applications do, but it works reasonably well in most included applications. One notable exception is Chromium, which is adding accessibility toolkit support in a future release; for now, if you want a web browser which supports the accessibility features, we recommend Firefox, which can be installed by entering the following into a terminal window:

sudo apt install firefox-esr

(Please note that we do not recommend using Firefox on Raspberry Pi OS unless you need accessibility features, as, unlike Chromium, it is not able to use the Raspberry Pi’s hardware to accelerate video playback.)

I don’t have a visual impairment, but I find the magnifier pretty useful in general for looking at the finer details of icons and the like, so I recommend installing it and having a go yourself.

User research

We already know a lot of the things that people are using Raspberry Pi for, but we’ve recently been wondering if we’re missing anything… So we’re now including a short optional questionnaire to ask you, the users, for feedback on what you are doing with your Raspberry Pi in order to make sure we are providing the right support for what people are actually doing.

This questionnaire will automatically be shown the first time you launch the Chromium browser on a new image. There are only four questions, so it won’t take long to complete, and the results are sent to a Google Form which collates the results.

You’ll notice at the bottom of the questionnaire there is a field which is automatically filled in with a long string of letters and numbers. This is a serial number which is generated from the hardware in your particular Raspberry Pi which means we can filter out multiple responses from the same device (if you install a new image at some point in future, for example). It does not allow us to identify anything about you or your Raspberry Pi, but if you are concerned, you can delete the string before submitting the form.
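For readers curious how a hardware-derived, non-identifying string like that can be produced in general, here is a small sketch that hashes the SoC serial from /proc/cpuinfo; this is only an illustration of the idea and is not the actual mechanism the questionnaire uses.

# Illustration only: deriving a stable, anonymous identifier from hardware info.
# This is not the actual mechanism used by the questionnaire; it just shows the idea.
import hashlib

def read_cpu_serial(path="/proc/cpuinfo"):
    """Return the SoC serial number on a Raspberry Pi, or None if not found."""
    try:
        with open(path) as f:
            for line in f:
                if line.lower().startswith("serial"):
                    return line.split(":")[1].strip()
    except OSError:
        pass
    return None

serial = read_cpu_serial()
if serial:
    print("Anonymous device ID:", hashlib.sha256(serial.encode()).hexdigest())
else:
    print("No hardware serial found on this machine.")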

As above, this questionnaire is entirely optional – if you don’t want to fill it in, just close Chromium and re-open it and you won’t see it again – but it would be very helpful for future product development if we can get this information, so we’d really appreciate it if as many people as possible would fill it in.

Other changes

There is also the usual set of bug fixes and small tweaks included in the image, full details of which can be found in the release notes on the download page.

One particular change which it is worth pointing out is that we have made a small change to audio. Raspberry Pi OS uses what is known as ALSA (Advanced Linux Sound Architecture) to control audio devices. Up until now, both the internal audio outputs on Raspberry Pi – the HDMI socket and the headphone jack – have been treated as a single ALSA device, with a Raspberry Pi-specific command used to choose which is active. Going forward, we are treating each output as a separate ALSA device; this makes managing audio from the two HDMI sockets on Raspberry Pi 4 easier and should be more compatible with third-party software. What this means is that after installing the updated image, you may need to use the audio output selector (right-click the volume icon on the taskbar) to re-select your audio output. (There is a known issue with Sonic Pi, which will only use the HDMI output however the selector is set – we’re looking at getting this fixed in a future release.)

Some people have asked how they can switch the audio output from the command line without using the desktop. To do this, you will need to create a file called .asoundrc in your home directory; ALSA looks for this file to determine which audio device it should use by default. If the file does not exist, ALSA uses “card 0” – which is HDMI – as the output device. If you want to set the headphone jack as the default output, create the .asoundrc file with the following contents:

defaults.pcm.card 1
defaults.ctl.card 1

This tells ALSA that “card 1” – the headphone jack – is the default device. To switch back to the HDMI output, either change the ‘1’s in the file to ‘0’s, or just delete the file.

How do I get it?

The new image is available for download from the usual place: our Downloads page.

To update an existing image, use the usual terminal command:

sudo apt update
sudo apt full-upgrade

To just install the bookshelf app:

sudo apt update
sudo apt install rp-bookshelf

To just install the magnifier, either find it under Universal Access in Recommended Software, or:

sudo apt update
sudo apt install mage

You’ll need to add the magnifier plugin to the taskbar after installing the program itself. Once you’ve installed the program and rebooted, right-click the taskbar and choose Add/Remove Panel Items; click Add, and select the Magnifier option.

We hope you like the changes — as ever, all feedback is welcome, so please leave a comment below!

The post Latest Raspberry Pi OS update – May 2020 appeared first on Raspberry Pi.



from Raspberry Pi Blog – Raspberry Pi https://ift.tt/2ZZzYUr

Newly Released Raspberry Pi 4 With 8GB Of RAM

  • An increased memory capacity makes the board ideal for enhanced processing of data-intensive applications
  • Retains all the essential features of the already available Raspberry Pi 4 boards

Simply within a year since its launch, the Raspberry Pi 4 witnessed a huge jump in sales, thanks to the many enhancements it underwent such as reduced idle […]

The post Newly Released Raspberry Pi 4 With 8GB Of RAM appeared first on Electronics For You.



from Electronics For You https://ift.tt/2M7esVx

Level Playing Shield

Hello everyone, and welcome back to another Friday Product Post here at SparkFun Electronics. This week we have revised versions of our Qwiic shields for Thing Plus and Arduino Nano to include headers with every order, a new version of the popular Raspberry Pi LoRa Gateway, and a simple 2x18 header. Let's jump in and take a closer look.

SparkFun Qwiic Shield for Thing Plus

DEV-16790
$3.95

The SparkFun Qwiic Shield for Thing Plus is a quick and easy way to enter into SparkFun's Qwiic ecosystem with your Thing Plus or Feather boards. Since the Thing Plus and Feather footprints are interchangeable, you can use this shield with any Arduino development board that uses the two! This shield connects the I2C bus (GND, 3.3V, SDA and SCL) on your Thing Plus to four SparkFun Qwiic connectors (two mounted horizontally and two vertically). The Qwiic ecosystem allows for easy daisy-chaining, so as long as your devices are on different addresses, you can connect as many Qwiic devices as you'd like.

This shield now comes with headers!
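The Qwiic boards themselves are typically programmed from Arduino, but the "different addresses" rule applies to any I2C master; as a hedged illustration, the sketch below scans a Linux I2C bus (for example on a Raspberry Pi) with the smbus2 package to confirm that every daisy-chained device answers at its own address. The bus number is an assumption that varies by host.

# Illustrative sketch: scan an I2C bus so you can confirm that every daisy-chained
# Qwiic device answers at a distinct address. Assumes a Linux host (e.g. Raspberry Pi)
# with the smbus2 package installed; bus number 1 is an assumption.
from smbus2 import SMBus

def scan_i2c(bus_number=1):
    found = []
    with SMBus(bus_number) as bus:
        for address in range(0x08, 0x78):   # valid 7-bit I2C address range
            try:
                bus.read_byte(address)      # any acknowledgement means a device is present
                found.append(address)
            except OSError:
                continue
    return found

print("I2C devices found at:", [hex(a) for a in scan_i2c()])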


SparkFun Qwiic Shield for Arduino Nano

DEV-16789
$3.95

The SparkFun Qwiic Shield for Arduino Nano is very similar to the Thing Plus version above, but instead of being Feather-compatible, it is meant to attach on top of Arduino Nano boards. Also, like the Thing Plus Shield, the Arduino Nano version comes with headers.


LoRa Raspberry Pi 4 Gateway with Enclosure

WRL-16447
$199.95

The LoRa Raspberry Pi 4 Gateway is a professional-grade gateway with the hacker in mind. While other low cost gateways are single channel, the LoRa Raspberry Pi Gateway comes with a fully assembled, heat-sinked concentrator capable of multi-channel, multi-node communication, all running in a friendly, hackable Raspberry Pi environment.


Female Header - 2x18

PRT-16581
$0.95

This is a 2x18-pin female header. Each pin has a spacing of 0.1 inches.


That's it for this week! As always, we can't wait to see what you make! Shoot us a tweet @sparkfun, or let us know on Instagram or Facebook. We’d love to see what projects you’ve made!




from SparkFun: Commerce Blog https://ift.tt/2B8KIFj

What’s Inside A Servo Motor ? How It Works?

Often one gets to hear the word “Servo Motor” in electronics. You might have used it many times in your projects as well. But what is inside it and how does it work? Let’s find out. A servo motor is basically a type of motor that allows us to control the position, acceleration and velocity while […]

The post What’s Inside A Servo Motor ? How It Works? appeared first on Electronics For You.



from Electronics For You https://ift.tt/2Xgcpox

The Rise Of AI And Its Impact

There is no accepted or standard definition of good artificial intelligence (AI). However, good AI is one that can guide users in understanding various options, explain tradeoffs among multiple possible choices and then help them make those decisions. Good AI will always honour the final decision made by humans. It is a common phenomenon that if you […]

The post The Rise Of AI And Its Impact appeared first on Electronics For You.



from Electronics For You https://ift.tt/3ey2aBs

Software Configurable Industrial I/O Modules for Control and Automation

  • Provides industrial operators flexibility to work quickly and remotely without extensive re-wiring
  • Optimal for adapting to the changes brought about by Industry 4.0

Traditional control systems require costly and labour-intensive manual configuration, with a complex array of channel modules, analogue and digital signal converters and individually wired inputs/outputs to communicate with the machines, instruments and […]

The post Software Configurable Industrial I/O Modules for Control and Automation appeared first on Electronics For You.



from Electronics For You https://ift.tt/2ZMYRCz


Raspberry Pi-Based IEPE Sensor Measurement HAT DAQ

  • Ideal for Machine Condition Monitoring and Edge Computing applications
  • Is an open-source solution that allows users to develop applications on Linux

Measurement Computing Corporation, a designer and manufacturer of data acquisition devices, has announced the release of the MCC 172 Integrated Electronic Piezoelectric (IEPE) Measurement Hardware Attached on Top (HAT) for Raspberry Pi. Ideal for […]

The post Raspberry Pi-Based IEPE Sensor Measurement HAT DAQ appeared first on Electronics For You.



from Electronics For You https://ift.tt/3gxaW4L

20 productivity tools for the Linux terminal

Many of us, admittedly, only use computers because they're fun. But some people use computers to get stuff done, and their theory is computers are supposed to make things faster, better, and more organized. In practice, though, computers don't necessarily improve our lives without a little manual reconfiguration to match our individual work styles.



from Opensource.com https://ift.tt/2Al9CkC

A new way to build cross-platform UIs for Linux ARM devices

Creating a great user experience (UX) for your applications is a tough job, especially if you are developing embedded applications. Today, there are two types of graphical user interface (GUI) tools generally available for developing embedded software: either they involve complex technologies, or they are extremely expensive.



from Opensource.com https://ift.tt/3deCYjh

Add interactivity to your Python plots with Bokeh

In this series of articles, I'm looking at the characteristics of different Python plotting libraries by making the same multi-bar plot in each one. This time I'm focusing on Bokeh (pronounced "BOE-kay").
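Since only the opening of the article is excerpted here, the snippet below is a hedged sketch of the kind of grouped multi-bar plot the series describes, assuming Bokeh 2.x; the categories and values are made up and this is not the article's own example.

# Hedged sketch of a grouped (multi-bar) plot in Bokeh; the data are made up
# and this is not the article's exact example. Assumes Bokeh 2.x.
from bokeh.models import ColumnDataSource
from bokeh.plotting import figure, show
from bokeh.transform import dodge

categories = ["Python", "JavaScript", "C++"]
source = ColumnDataSource(data={
    "categories": categories,
    "2019": [25, 30, 15],
    "2020": [35, 32, 14],
})

p = figure(x_range=categories, title="Hypothetical usage by year")
p.vbar(x=dodge("categories", -0.18, range=p.x_range), top="2019", width=0.3,
       source=source, color="#718dbf", legend_label="2019")
p.vbar(x=dodge("categories", 0.18, range=p.x_range), top="2020", width=0.3,
       source=source, color="#e84d60", legend_label="2020")
p.legend.location = "top_left"

show(p)  # opens the interactive plot in a browser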



from Opensource.com https://ift.tt/2B7ZH2h

New NFC Reader IC for Digital Car Keys by STMicroelectronics

  • Delivers rapid and convenient car-key connectivity over extended distances
  • Compliant with the Digital Key Release 2.0 standard by the Car Connectivity Consortium and certified by NFC Forum

STMicroelectronics has announced a new addition to its digital car key portfolio of ST25R near-field communication (NFC) reader ICs, the ST25R3920. The new device introduces enhanced features for better […]

The post New NFC Reader IC for Digital Car Keys by STMicroelectronics appeared first on Electronics For You.



from Electronics For You https://ift.tt/2XEvAY7

TSMC Delivers 7nm Automotive Design Enablement Platform

TSMC announced the availability of the first 7nm Automotive Design Enablement Platform (ADEP), accelerating time-to-design for customers’ AI Inferencing Engines, Advanced Driver-assistance Systems (ADAS) and Autonomous Driving applications.

With its 7nm family of technologies in volume production since 2018, TSMC is able to deliver the leading-edge processes to fulfill high computation needs for automotive applications, and also meet rigorous durability and reliability requirements.

TSMC’s ADEP is certified with the ISO 26262 standard for functional safety, and consists of Standard Cell, GPIO, and SRAM foundation IP based on the Company’s years of experience in 7nm production for design robustness and first-time success. In addition, TSMC’s foundation IP has also passed rigorous qualification according to AEC-Q100 Grade-1, providing customers with another layer of quality assurance.

Process design kits and support from third party vendor IPs are also available, enabling customers to further focus their efforts on the unique capabilities that distinguish their product in the market. Furthermore, TSMC not only provides robust 7nm capacity with automotive-grade defect PPM, it is also committed to supporting the long life cycles of automotive products.

“Automotive applications have always demanded the highest level of quality. With the advent of ADAS and autonomous driving, powerful and efficient computing is now also required to enable AI inferencing engines to perceive the road and understand traffic to help drivers make split-second decisions,” said Dr. Cliff Hou, Senior Vice President of Research & Development and Technology Development at TSMC. “TSMC is uniquely positioned with our 7nm experience and comprehensive design ecosystem to unleash our customers’ innovations and achieve first-time silicon success while meeting the rigorous demands of bringing safer and smarter vehicles to market.”

In addition to a robust automotive IP ecosystem, TSMC Fabs are certified with IATF 16949 for automotive product manufacturing. TSMC also provides an Automotive Service Package for wafer manufacturing, with a built-in “Zero Defect Mindset” for tightened control and enhanced gating to achieve automotive DPPM goals, as well as a Safe Launch Program during production ramp to ensure the success of new product introduction.

The post TSMC Delivers 7nm Automotive Design Enablement Platform appeared first on EE Times Asia.



from EE Times Asia https://ift.tt/2ZNJEB2

SparkFun Qwiic Shield for Thing Plus (DEV-16790)



The SparkFun Qwiic Shield for Thing Plus provides you with a quick and easy way to enter into SparkFun's Qwiic ecosystem with your Thing Plus or Feather boards. Since the Thing Plus and Feather footprints are interchangeable, you can use this shield with any Arduino development board that uses the two! This shield connects the I2C bus (GND, 3.3V, SDA, and SCL) on your Thing Plus to four SparkFun Qwiic connectors (two horizontally and two vertically mounted). The Qwiic ecosystem allows for easy daisy chaining, so as long as your devices are on different addresses, you can connect as many Qwiic devices as you'd like.

The Qwiic Shield for Thing Plus comes with one 12-pin and one 16-pin header. You will need to solder the headers to the shield and, if necessary, to your Thing Plus or Feather board. Take care to match the markings on the Qwiic Shield to the appropriate pins on your Thing to avoid possibly damaging your boards.


The SparkFun Qwiic Connect System is an ecosystem of I2C sensors, actuators, shields and cables that make prototyping faster and less prone to error. All Qwiic-enabled boards use a common 1mm pitch, 4-pin JST connector. This reduces the amount of required PCB space, and polarized connections mean you can’t hook it up wrong.


Includes:

Features:

  • Thing Plus and Feather Footprint Compatible
  • 4x Qwiic Connection Ports
  • I2C Jumper
  • 3.3V and GND Buses

Revision Changes: In this revision of the SparkFun Qwiic Shield for Thing Plus, we have made only one change, listed below, to improve the board's ease of use. If you are unsure which version you purchased, please refer to the product pictures.

  • The SparkFun Qwiic Shield for Thing Plus now includes a set of Feather-Stackable Headers.

Documents:



from New Products at SparkFun https://ift.tt/2AeRJEe

Thursday, May 28, 2020

What Today’s Engineers Need to Know

What does today’s engineer need to know? And how should media such as EE Times and EDN cover such topics for their audience?

These are overarching questions that might seem too general. Yet such musings often bring into the open the preoccupations in the minds of the industry’s thinking heads.

Daniel Cooley, senior vice president and chief strategy officer at Silicon Labs, did not disappoint us during a recent chat. He went straight to four big industry-wide topics that he believes are changing the face of the tech world:

  1. AI
  2. Security
  3. The roles that tech companies play in the real world (they will be scrutinized by governments and consumers)
  4. The technology stack (what happens in the cloud matters to chip designers)

4 Things Today’s Engineer Must Know

Cooley deems these topics either poorly framed or insufficiently explained in the media. Simplistic views of AI are already creating huge misconceptions in the market. And although engineers loathe politics, they must appreciate that the tech companies they work for face far more scrutiny from government regulators, much as utility companies do. Lobbying in Washington might have meant nothing to most chip engineers ten years ago; today, US chip companies can’t even sell to Huawei.

Following is a portion of our conversation with Cooley, in which he explained why these four issues weigh on his mind and why he thinks engineers should be paying attention.

On AI

Daniel Cooley: First on the technology front. This is a big one. But there are too many misconceptions out there, because of the way we talk about machine learning and artificial intelligence.

[AI is covered in such a way that] it is everything to everybody, and it means nothing. I’m reading it out there and there’s a problem.

I mean, there are actually really good applications for machine learning and AI, [and they are not covered.]

AI is fundamentally a new kind of computing. It’s not the ‘if/then/else’ kind of stuff that we’ve been working with for the last 40 years. This is really just fundamentally different.

And I think someone needs to sit down and parse out why it’s different and explain what it’s good for.

We know today the difference between general artificial intelligence, which is Terminator and robots, where most of this [coverage] is going right now, and [something analogous to] drug-sniffing dogs.

You know, dogs get trained really well to do something. But you have no idea how it works. I can’t ask my dog how his brain works. [But] I don’t have to program his brain anymore. I just kind of train him.

As with understanding the dog’s limitations, not enough people understand what the technology is, how it’s being applied, what it’s being used for, and what it’s not being used for.

They just say it’s AI. VCs are doing all this stuff, companies are starting up, and every company on the planet claims to be an AI expert now, even though they’re not.

So, AI is a fundamental technology. What is it? What isn’t it? Where did it start? Where did it come from? Trace it back to Carver Mead in the 1980s with neuromorphic computing. Draw the lineage.
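
(Editor’s aside: the contrast Cooley draws, programming explicit rules versus training from examples, can be made concrete in a few lines of code. The sketch below is purely illustrative and comes from neither Cooley nor Silicon Labs: the first function encodes a decision as explicit if/then/else logic, while the second learns the same decision, a simple logical AND, from labeled examples with a single perceptron.)

    #include <cstdio>

    // Hand-written rule: the engineer encodes the logic explicitly.
    bool rule_based(int a, int b) {
        if (a == 1 && b == 1) return true;    // classic if/then/else programming
        else return false;
    }

    // Learned rule: a single perceptron picks up the same behavior from examples.
    struct Perceptron {
        float w0 = 0.0f, w1 = 0.0f, bias = 0.0f;
        bool predict(float x0, float x1) const {
            return w0 * x0 + w1 * x1 + bias > 0.0f;
        }
        void train(const float (*X)[2], const int *y, int n, int epochs, float lr) {
            for (int e = 0; e < epochs; ++e)
                for (int i = 0; i < n; ++i) {
                    int err = y[i] - (predict(X[i][0], X[i][1]) ? 1 : 0);
                    w0   += lr * err * X[i][0];   // nudge weights toward fewer mistakes
                    w1   += lr * err * X[i][1];
                    bias += lr * err;
                }
        }
    };

    int main() {
        // Labeled examples of the behavior we want (a logical AND).
        const float X[4][2] = {{0, 0}, {0, 1}, {1, 0}, {1, 1}};
        const int   y[4]    = {0, 0, 0, 1};

        Perceptron p;
        p.train(X, y, 4, 50, 0.1f);           // "training the dog" instead of programming it

        for (int i = 0; i < 4; ++i)
            std::printf("(%g, %g) -> rule: %d, trained: %d\n",
                        X[i][0], X[i][1],
                        rule_based((int)X[i][0], (int)X[i][1]) ? 1 : 0,
                        p.predict(X[i][0], X[i][1]) ? 1 : 0);
        return 0;
    }

The toy AND gate is beside the point; what matters is that the second version is specified by data rather than by code, which is why, as Cooley puts it, you can use the result without being able to ask the dog how its brain works.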

Daniel Cooley

On security

Cooley: The second thing is security technology. Security has primarily been a software problem, and a higher-layer software problem at that, for a very long time.

But it’s now working its way down deep into the technology stack to the foundries. Actually, things are happening at the foundries, chip design houses, and [at] every level of the technology stack.

When it comes to security, I think engineers need to know why it matters.

They need to understand the implications for when it goes wrong and how it’s implemented. How is it being applied? And when it goes wrong, here’s what happens.

I think everybody understands credit card theft and the big stuff that we read about in the news because a hundred million people’s accounts got stolen.

But how does security impact chip design? How does it impact embedded software, and what is the industry collectively doing about that?

There’s a lot of people out there talking about security, but it’s another one of those very, very noisy topics.

If you can help distill it down to something that’s manageable and meaningful, I think that’s important.

On roles tech companies play

Cooley: This isn’t so much about core technologies, but I think people need to understand a little bit more. It’s about the role that technology companies play in the world.

And this has been changing over time. Twenty years ago, all the best tech companies had to do was be good at tech and have the best mousetrap. For example, the best search is what made Google Google. And the best cell phone technology made Qualcomm.

These big, big companies just were the best at what they did. But everything has changed in the way technology has to be connected to governments and to the trust that individuals put in these tech companies. You’re trusting Amazon with a lot when you talk to Alexa.

Technology companies will be scrutinized the way energy companies have been for a very long time. The way other industries such as pharmaceuticals, transportation, manufacturing and even entertainment have been scrutinized, right? [For example], there is a whole division of our government looking after entertainment [so that] we have rating systems on movies and TV. [And if you don’t comply], you know, you’re liable. You can’t just publish anything you want.

So, tech companies are going to have to contend with all this. You saw what Microsoft went through in the ’90s. That was just the first of, I think, 50 years of technology [history] fusing its way into every ecosystem.

And I have no idea how you actually take that topic and make sense of it, except that all the engineers out there [need to know what’s] happening in the background.

And whether they like it or not, [there’s an] expectation put on technology companies — from diversity to corporate social governance to investor expectations to how we lobby, through the Semiconductor Industry Association, for example.

We might not have thought we needed the big lobbying thing 10 years ago. But all of a sudden, now we do. We’re in the news because we can’t sell chips to Huawei. And what does that mean? So, you know, general awareness needs to be there about the roles that technology companies play, and how they interact with the world. I know, Junko, you publish on that a lot and I read all your articles there.

On technology stack

Cooley: The fourth thing here that I think is needed is the technology stack — from hardware to cloud software. It is changing a lot.

The old stratification that we’ve thrived on for the last 20 years isn’t going to be the same technology stack in 20 years.

Take, for example, fabless chip design houses. We sell chips to pure-play software companies like Microsoft or Google. But it’s changing. All those companies are doing their own chips now. They’re going down-stack. And all the chip companies, like Nvidia or Intel, are moving up-stack.

So, what becomes important is bringing some transparency, drawing some concrete examples of this… like the role that foundries play. When GlobalFoundries stopped its 7nm development, that was a catalyst for a whole lot of changes. That’s still trickling through the system right now.

When we make products at Silicon Labs, we have engineers at our company designing the chips, of course. We’re a chip design company, and we sell chips, and that’s how we make money.

But we are also writing the embedded software on those chips and building products for our supply chain so that we can make those chips securely.

We have to deploy equipment into our supply chain so that we can inject information.

We have engineers writing software for the mobile phones that have to talk to these devices. We have engineers writing software for the cloud to manage the data and software updates.

So, the product itself now exists across this entire spectrum in a way that just didn’t happen even five years ago.

That trend is going to continue. If you can cover anything about the stratification of the technology stack, it will help chip designers understand that what happens in the cloud matters to them and their product, or even vice versa.

To conclude his thoughts on AI, security, the roles tech companies play in the global economy, and a technology stack that must be fused across all layers of society, Cooley noted:

Fundamentally, computing, PCs, the Internet, and smartphones got us the industry we have today. TSMC’s biggest priority every year, I guarantee you, is getting the next Apple socket. That’s what’s been driving the biggest foundry, the biggest cell phone company, and the biggest chip company thus far. But in 20 years, or even 10 years, it’s not going to be that. It’s gotta be something else. So, how do we show people where things are going in the long term?




from EE Times Asia https://ift.tt/3gxjGYA

How I channel my inner Star Trek character at work

In a recent Twitter thread, I self-identified as "some days Deanna, some days Riker." Others shared their own "Star Trek Sp...