Tech roundup 23: a journal published by a bot

Read a tech roundup with this week’s news that our powerful bot has chosen: blockchain, AI, development, corporates and more.

Gooooooood morning, Network!!! Hey, this is not a test, this is a tech roundup. Time to rock it from the Delta to the DMZ.

AI, bots and robots

Blockchain and decentralization

Woman computer scientist of the week
Ping Fu is a Chinese-American entrepreneur. She is the co-founder of 3D software development company Geomagic, and was its chief executive officer until February 2013 when the company was acquired by 3D Systems Inc. As of March 2014, she is the Vice President and Chief Entrepreneur Officer at 3D Systems. Fu grew up in China during the Cultural Revolution and moved to the United States in 1984. She co-founded Geomagic in 1997 with her then-husband Herbert Edelsbrunner, and has been recognized for her achievements with the company through a number of awards, including being named Inc. magazine’s 2005 “Entrepreneur of the Year”. In 2013, she published her memoir, Bend, Not Break, co-authored with MeiMei Fox.

Cloud and architecture

Development and languages

Quote of the week

I object to doing things that computers can do.

        — Olin Shivers


Other news


Tech roundup 22: a journal published by a bot

Read a tech roundup with this week’s news that our powerful bot has chosen: blockchain, AI, development, corporates and more.

Gooooooood morning, Folk!!! Hey, this is not a test, this is a tech roundup. Time to rock it from the Delta to the DMZ.

AI, bots and robots

  • Graph Matching Networks for Learning the Similarity of Graph Structured Objects
    This paper addresses the challenging problem of retrieval and matching of
    graph structured objects, and makes two key contributions. First, we
    demonstrate how Graph Neural Networks (GNN), which have emerged as an effective
    model for various supervised prediction problems defined on structured data,
    can be trained to produce embeddings of graphs in vector spaces that enable
    efficient similarity reasoning. Second, we propose a novel Graph Matching
    Network model that, given a pair of graphs as input, computes a similarity
    score between them by jointly reasoning on the pair through a new cross-graph
    attention-based matching mechanism. We demonstrate the effectiveness of our
    models on different domains including the challenging problem of
    control-flow-graph based function similarity search that plays an important
    role in the detection of vulnerabilities in software systems. The experimental
    analysis demonstrates that our models are not only able to exploit structure in
    the context of similarity learning but they can also outperform domain-specific
    baseline systems that have been carefully hand-engineered for these problems.
  • Learning new skills in InfoSec without getting overwhelmed
  • Learning to Represent Edits
  • Autonomous robotic intracardiac catheter navigation using haptic vision
  • Botanical Sexism Cultivates Home-Grown Allergies
  • BattleBots Made by 5th to 8th Graders in Robotics Club
  • Who to Sue When a Robot Loses Your Fortune
  • Tertill Weeding Robot
  • Alexa has been eavesdropping this whole time
    Would you let a stranger eavesdrop in your home and keep the recordings? For most people, the answer is, “Are you crazy?”
    Yet that’s essentially what Amazon has been doing to millions of us with its assistant Alexa in microphone-equipped Echo speakers. And it’s hardly alone: Bugging our homes is Silicon Valley’s next frontier.
    Many smart-speaker owners don’t realize it, but Amazon keeps a copy of everything Alexa records after it hears its name. Apple’s Siri, and until recently Google’s Assistant, by default also keep recordings to help train their artificial intelligences.
    So come with me on an unwelcome walk down memory lane. I listened to four years of my Alexa archive and found thousands of fragments of my life: spaghetti-timer requests, joking houseguests and random snippets of “Downton Abbey.” There were even sensitive conversations that somehow triggered Alexa’s “wake word” to start recording, including my family discussing medication and a friend conducting a business deal.
    You can listen to your own Alexa archive here. Let me know what you unearth.
    For as much as we fret about snooping apps on our computers and phones, our homes are where the rubber really hits the road for privacy. It’s easy to rationalize away concerns by thinking a single smart speaker or appliance couldn’t know enough to matter. But across the increasingly connected home, there’s a brazen data grab going on, and there are few regulations, watchdogs or common-sense practices to keep it in check.
    Let’s not repeat the mistakes of Facebook in our smart homes. Any personal data that’s collected can and will be used against us. An obvious place to begin: Alexa, stop recording us.
    – – –
    “Eavesdropping” is a sensitive word for Amazon, which has battled lots of consumer confusion about when, how…
  • Smarter Training of Neural Networks
  • Robotics startup Anki is shutting down
  • Human Pose Estimation with Deep Learning
  • Build a Neural Network from Scratch
    Build a basic Feedforward Neural Network with backpropagation in Python
  • Listen to TurboTax Lie to Get Out of Refunding Overcharged Customers
  • TensorFlow Graphics: Computer Graphics Meets Deep Learning
    Posted by Julien Valentin and Sofien Bouaziz
  • Diffeq.jl v6.4: Full GPU ODEs, Neural ODEs with Batching on GPUs, and More
    This is a huge release. We should take the time to thank every contributor
    to the JuliaDiffEq package ecosystem. A lot of this release focuses on performance
    features. The ability to use stiff ODE solvers on the GPU, with automated
    tooling for matrix-free Newton-Krylov, faster broadcast, better Jacobian
    re-use algorithms, memory use reduction, etc. All of these combined give some
    pretty massive performance boosts in the area of medium to large sized highly
    stiff ODE systems. In addition, numerous robustness fixes have enhanced the
    usability of these tools, along with a few new features like an implementation
    of extrapolation for ODEs and the release of ModelingToolkit.jl.

    Let’s start by summing up this release with an example.

    Comprehensive Example

    Here’s a nice showcase of DifferentialEquations.jl: Neural ODE with batching on
    the GPU (without internal data transfers) with high order adaptive implicit ODE
    solvers for stiff equations using matrix-free Newton-Krylov via preconditioned
    GMRES and trained using checkpointed adjoint equations. Few programs work
    directly with neural networks and allow for batching, few utilize GPUs, few
    have methods applicable to highly stiff equations, few allow for large stiff
    equations via matrix-free Newton-Krylov, and finally few have checkpointed
    adjoints. This is all done in a high level programming language. What does the
    code for this look like?

    using OrdinaryDiffEq, Flux, DiffEqFlux, DiffEqOperators, CuArrays
    x = Float32[2.; 0.]|>gpu
    tspan = Float32.((0.0f0,25.0f0))
    dudt = Chain(Dense(2,50,tanh),Dense(50,2))|>gpu
    p = DiffEqFlux.destructure(dudt)
    dudt_(du,u::TrackedArray,p,t) = du .= DiffEqFlux.restructure(dudt,p)(u)
    dudt_(du,u::AbstractArray,p,t) = du .= Flux.data(DiffEqFlux.restructure(dudt,p)(u))
    ff = ODEFunction(dudt_,jac_prototype = JacVecOperator(dudt_,x))
    prob = ODEProblem(ff,x,tspan,p)

    That is 10 lines of code, and we can continue to make it even more succinct.

    Now, onto the release highlights.

    Full GPU Support in ODE Solvers

    Now not just the non-stiff ODE solvers but the stiff ODE solvers allow for
    the initial condition to be a GPUArray, with the internal methods not
    performing any indexing in order to allow for all computations to take place
    on the GPU without data transfers. This allows for expensive right-hand side
    calculations, like those in neural ODEs or PDE discretizations, to utilize
    GPU acceleration without worrying about whether the cost of data
    transfers will overtake the solver speed enhancements.

    While the presence of broadcast throughout the solvers might worry one about
    performance, the next feature is what keeps it fast.

    Fast DiffEq-Specific Broadcast

    Yingbo Ma (@YingboMa) implemented a fancy broadcast wrapper that allows for
    all sorts of information to be passed to the compiler in the differential
    equation solver’s internals, making a bunch of no-aliasing and sizing assumptions
    that are normally not possible. These change the internals to all use a
    special @.. which turns out to be faster than standard loops, and this is the
    magic that really enabled the GPU support to happen without performance
    regressions (and in fact, we got some speedups from this, close to 2x in some cases).

    Smart linsolve defaults and LinSolveGMRES

    One of the biggest performance-based features to be released is smarter linsolve
    defaults. If you are using dense arrays with a standard Julia build, OpenBLAS
    does not perform recursive LU factorizations which we found to be suboptimal
    by about 5x in some cases. Thus our default linear solver now automatically
    detects the BLAS installation and utilizes RecursiveFactorizations.jl to give
    this speedup for many standard stiff ODE cases. In addition, if you passed a
    sparse Jacobian for the jac_prototype, the linear solver now automatically
    switches to a form that works for sparse Jacobians. If you use an
    AbstractDiffEqOperator, the default linear solver automatically switches to
    a Krylov subspace method (GMRES) and utilizes the matrix-free operator directly.
    Banded matrices and Jacobians on the GPU are now automatically handled as well.

    Of course, that’s just the defaults, and most of this was possible before but
    now has just been made more accessible. In addition to these, the ability to
    easily switch to GMRES was added via LinSolveGMRES. Just add
    linsolve = LinSolveGMRES() to any native Julia algorithm with a swappable
    linear solver and it’ll switch to using GMRES. In this you can pass options
    for preconditioners and tolerances as well. We will continue to integrate this
    better into our integrators as doing so will enhance the efficiency when
    solving large sparse systems.
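
The default selection described above amounts to dispatching on how the Jacobian is represented. A hypothetical sketch of that decision logic in Python (the names and flags here are invented for illustration and are not the DifferentialEquations.jl internals):

```python
def choose_linsolve(jac_prototype):
    """Pick a linear-solver strategy from the Jacobian representation,
    mirroring the kind of smart defaults described above (illustrative only)."""
    if jac_prototype is None:
        return "dense LU (recursive factorization)"
    if getattr(jac_prototype, "is_matrix_free", False):
        # A matrix-free operator only provides J*v products, so use Krylov.
        return "matrix-free Krylov (GMRES)"
    if getattr(jac_prototype, "is_sparse", False):
        return "sparse LU"
    return "dense LU (recursive factorization)"

class MatrixFreeOp:
    is_matrix_free = True

class SparseJac:
    is_sparse = True

print(choose_linsolve(None))           # dense LU (recursive factorization)
print(choose_linsolve(SparseJac()))    # sparse LU
print(choose_linsolve(MatrixFreeOp())) # matrix-free Krylov (GMRES)
```

The point of structuring defaults this way is that users get the specialized path simply by describing their Jacobian, without choosing a solver explicitly.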

    Automated J*v Products via Autodifferentiation

    When using GMRES, one does not need to construct the full Jacobian matrix.
    Instead, one can simply use the directional derivatives in the direction of
    v in order to compute J*v. This has now been put into an operator form
    via JacVecOperator(dudt_,x), so now users can directly ask for this to
    occur using one line. It allows for the use of autodifferentiation or
    numerical differentiation to calculate the J*v.
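
The J*v trick is independent of any particular library. A minimal sketch of the numerical-differentiation variant (the finite-difference analogue of the autodifferentiated operator described above): the directional derivative (f(x + eps*v) - f(x)) / eps approximates J*v without ever forming J.

```python
def jac_vec(f, x, v, eps=1e-6):
    """Approximate J*v at x via a forward finite difference,
    avoiding construction of the full Jacobian matrix."""
    fx = f(x)
    fxv = f([xi + eps * vi for xi, vi in zip(x, v)])
    return [(a - b) / eps for a, b in zip(fxv, fx)]

# Example: f(u) = (u0*u1, u0 + u1) has Jacobian [[u1, u0], [1, 1]].
f = lambda u: [u[0] * u[1], u[0] + u[1]]
x, v = [2.0, 3.0], [1.0, 0.0]
print(jac_vec(f, x, v))  # ≈ [3.0, 1.0], the first Jacobian column at x
```

This is exactly why GMRES pairs well with matrix-free operators: the Krylov iteration only ever asks for products J*v, each of which costs one extra function evaluation here.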


    DEStats

    One of the most niche but nicest new features is DEStats. If you do sol.destats
    then you will see a load of information on how many steps were taken, how many
    f calls were done, etc. giving a broad overview of the performance of the
    algorithm. Thanks to Kanav Gupta (@kanav99) and Yingbo Ma (@YingboMa) for really
    driving this feature since it has allowed for a lot of these optimizations to
    be more thoroughly investigated. You can expect DiffEq development to
    accelerate with this information!

    Improved Jacobian Reuse

    One of the things which was noticed using DEStats was that the amount of Jacobians
    and inversions that were being calculated could be severely reduced. Yingbo Ma (@YingboMa)
    did just that, greatly increasing the performance of all implicit methods like
    KenCarp4 showing cases in the 1000+ range where OrdinaryDiffEq’s native
    methods outperformed Sundials CVODE_BDF. This still has plenty of room for
    improvement.

    DiffEqBiological performance improvements for large networks (speed and sparsity)

    Samuel Isaacson (@isaacson) has been instrumental in improving DiffEqBiological.jl
    and its ability to handle large reaction networks. It can now parse the networks
    much faster and can build Jacobians which utilize sparse matrices. It pairs
    with his ParseRxns(???) library and has been a major source of large stiff
    test problems!

    Partial Neural ODEs, Batching and GPU Fixes

    We now have working examples of partial neural differential equations, which
    are equations which have pre-specified portions that are known while others
    are learnable neural networks. These also allow for batched data and GPU
    acceleration. Not much else to say except let your neural diffeqs go wild!

    Low Memory RK Optimality and Alias_u0

    Kanav Gupta (@kanav99) and Hendrik Ranocha (@ranocha) did amazing jobs at doing memory optimizations of
    low-memory Runge-Kutta methods for hyperbolic or advection-dominated PDEs.
    Essentially these methods have a minimal number of registers which are
    theoretically required for the method. Kanav added some tricks to the implementation
    (using a fun = -> += overload idea) and Hendrik added the alias_u0 argument
    to allow for using the passed in initial condition as one of the registers. Unit
    tests confirm that our implementations achieve the minimum possible number of
    registers, allowing for large PDE discretizations to make use of
    DifferentialEquations.jl without loss of memory efficiency. We hope to see
    this in use in some large-scale simulation software!
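
The register-counting idea can be illustrated with Williamson's classic two-register (2N-storage) third-order Runge-Kutta scheme, which keeps only the solution u and one accumulator du no matter how many stages run. This is a generic sketch of the technique, not the DifferentialEquations.jl implementation:

```python
import math

def lsrk3(f, u, t, dt, nsteps):
    """Williamson 2N-storage RK3: only two registers (u and du) are ever
    kept, since each stage overwrites du in place (A[0] = 0 resets it)."""
    A = [0.0, -5.0 / 9.0, -153.0 / 128.0]
    B = [1.0 / 3.0, 15.0 / 16.0, 8.0 / 15.0]
    c = [0.0, 1.0 / 3.0, 3.0 / 4.0]
    du = 0.0
    for _ in range(nsteps):
        for a, b, ci in zip(A, B, c):
            du = a * du + dt * f(t + ci * dt, u)
            u = u + b * du
        t += dt
    return u

# u' = -u, u(0) = 1, integrated to t = 1; the exact answer is exp(-1).
u = lsrk3(lambda t, u: -u, 1.0, 0.0, 0.01, 100)
print(u, math.exp(-1))
```

For a PDE discretization where u is a huge array, the same structure applies with in-place array updates, which is exactly where the alias_u0 option (reusing the passed-in initial condition as one of the two registers) saves an allocation.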

    More Robust Callbacks

    Our ContinuousCallback implementation now has increased robustness in double
    event detection, using a new strategy. Try to break it.

    GBS Extrapolation

    New contributor Konstantin Althaus (@AlthausKonstantin) implemented midpoint
    extrapolation methods for ODEs using Barycentric formulas and different
    adaptivity behaviors. We will be investigating these methods for their
    parallelizability via multithreading in the context of stiff and non-stiff ODEs.
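
The core of midpoint extrapolation is easy to sketch: the explicit midpoint rule has an error expansion in even powers of the step size, so combining solutions computed with n and 2n substeps cancels the h^2 term. A generic illustration of one extrapolation step (not the new Barycentric implementation):

```python
import math

def midpoint(f, u0, t0, H, n):
    """Gragg's explicit midpoint rule with n substeps over [t0, t0 + H]:
    one Euler step to start, then leapfrog updates."""
    h = H / n
    um, u = u0, u0 + h * f(t0, u0)
    for k in range(1, n):
        um, u = u, um + 2 * h * f(t0 + k * h, u)
    return u

def extrapolated(f, u0, t0, H, n):
    """Richardson-extrapolate midpoint solutions with n and 2n substeps.
    The even error expansion means (4*T2 - T1)/3 cancels the h^2 term."""
    T1 = midpoint(f, u0, t0, H, n)
    T2 = midpoint(f, u0, t0, H, 2 * n)
    return (4 * T2 - T1) / 3

exact = math.exp(-1)
approx = extrapolated(lambda t, u: -u, 1.0, 0.0, 1.0, 8)
print(approx, exact)
```

The parallelizability mentioned above comes from the fact that the T1 and T2 sweeps (and further rows of the extrapolation tableau) are independent and can run on separate threads.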

    ModelingToolkit.jl Release

    ModelingToolkit.jl has now gotten some form of a stable release. A lot of credit
    goes to Harrison Grodin (@HarrisonGrodin). While it has
    already been out there and found quite a bit of use, it has really picked up
    steam over the last year as a modeling framework suitable for the flexibility of
    DifferentialEquations.jl. We hope to continue its development and add features
    like event handling to its IR.

    SUNDIALS J*v interface, stats, and preconditioners

    While we are phasing out Sundials from our standard DifferentialEquations.jl
    practice, the Sundials.jl wrapper continues to improve as we add more features to
    benchmark against. Sundials’ J*v interface has now been exposed, so adding a
    DiffEqOperator to the jac_prototype will work with Sundials. DEStats is
    hooked up to Sundials, and now you can pass preconditioners to its internal
    Newton-Krylov methods.

    Next Directions

    • Improved nonlinear solvers for stiff SDE handling
    • More adaptive methods for SDEs
    • Better boundary condition handling in DiffEqOperators.jl
    • More native implicit ODE (DAE) solvers
    • Adaptivity in the MIRK BVP solvers
    • LSODA integrator interface
    • Improved BDF

Blockchain and decentralization

Woman computer scientist of the week
Jeanne Ferrante is a computer scientist active in the field of compiler technology, where she has made important contributions to optimization and parallelization. She is Professor of Computer Science and Engineering at the University of California, San Diego. She received her B.A. from New College at Hofstra University in 1969 and her Ph.D. from the Massachusetts Institute of Technology in 1974. Before joining UC San Diego in 1994, she taught at Tufts University from 1974 until 1978, where she worked on computational complexity problems such as the theory of rational order and the first-order theory of real addition. In 1978, she joined the IBM T.J. Watson Research Center as a research staff member, where she remained until 1994.

Cloud and architecture

Development and languages

Quote of the week

Simplicity is the ultimate sophistication.

        — Leonardo da Vinci


Other news


Tech roundup 21: a journal published by a bot

Read a tech roundup with this week’s news that our powerful bot has chosen: blockchain, AI, development, corporates and more.

Gooooooood morning, Inhabitants!!! Hey, this is not a test, this is a tech roundup. Time to rock it from the Delta to the DMZ.

AI, bots and robots

Blockchain and decentralization

  • Tether Says Stablecoin Is Only Backed 74% by Cash, Securities
  • Distributions vs. Releases: Why Python Packaging Is Hard
  • IPFS-Deploy – Zero-Config CLI to Deploy Static Websites to IPFS
  • Ethernet MDIO / MMD Design for FPGA Open Source Network Processor
  • Redesigning Trust: Blockchain for Supply Chains
    The Challenge

    Blockchain has the potential to revolutionize sectors and ecosystems in which trust is needed among parties with misaligned interests. It is precisely within these contexts, however, that deploying such a new and complex technology can be the most difficult. Providing increased efficiency, transparency and interoperability across supply chains has been one of the most fertile areas for blockchain experimentation, illustrating both the opportunities and challenges in realizing the transformative potential of this technology. Many of these experiments have focused on ports as the intersection of diverse and vital supply chains. In most cases, projects have come about as the result of the efforts of one or two parties focused primarily on their own interests, without taking into consideration unintended consequences or downstream effects on other parties or on the system as a whole. The result is a fractured system that leaves behind parts of the sector while capturing economic efficiency gains for certain actors. In fact, the hyper-focus on efficiency gains can reinforce existing mistrust or competition and undermine or even block the transformation that blockchain technology has the potential to bring about.

    The Opportunity

    This project will convene a broad, multi-stakeholder community to co-design governance frameworks to accelerate the most impactful uses of blockchain in port systems in a manner that is strategic, forward-thinking, and globally interoperable; and by which countries across the economic spectrum will be able to benefit. Since systemwide blockchain deployment will likely be accompanied by significant disruptions across industries, the deployment of this technology requires careful consideration of unintended consequences, as well as measures to ensure that narrow, un-scalable, or bilaterally-designed solutions do not dominate the marketplace.

    The frameworks developed will ensure that diverse stakeholders can utilize the unique qualities of blockchain to create trust in an environment that is prone to mistrust. They will be prototyped and piloted with relevant stakeholders, iterated based on learnings, and then disseminated broadly for international adoption. The frameworks can be applied to create a systematic global approach to the deployment of blockchain that allows for variability, but is not tied to a specific port system, and that helps to ensure that the needs of all players in the ecosystem are considered as the system transforms.

Woman computer scientist of the week
Nalini Venkatasubramanian is a Professor of Computer Science in the Donald Bren School of Information and Computer Sciences at the University of California, Irvine. She is known for her work on the effective management and utilization of resources in the evolving global information infrastructure. Her research interests include multimedia computing, networked and distributed systems, Internet technologies and applications, ubiquitous computing, and urban crisis response. She also addresses the problem of composing resource management services in distributed systems.

Cloud and architecture

Development and languages

Quote of the week

A notation is important for what it leaves out.

        — Joseph Stoy


  • Intel Stockpiling 10nm Chips
  • Alaskan halibut provides a glimpse of Amazon’s strategy with Whole Foods
  • Google Staffers Share Stories of ‘Systemic’ Retaliation
  • Anti-vaxxer leaflet found inserted in book sold by Amazon
  • Tesla Model 3 vs. BMW M3
    It’s the super-saloon fight we’ve all been waiting for: Tesla Model 3 Performance vs BMW M3, electric vs petrol. We head to Thunderhill Raceway in Northern California to apply some Top Gear science.
  • WeWork Files for IPO
    The company initially filed paperwork with the Securities and Exchange Commission in December, according to a memo to employees.
  • Alphabet Announces First Quarter 2019 Results
  • Google Advertising Revenue Growth Slows, Triggering Share Slump
  • Google Shows First Cracks in Years
    Google’s once-untouchable online-advertising operation took a body blow, hurt by mounting competition and struggles within its increasingly high-profile YouTube unit.
  • Profitable Giants Like Amazon Pay $0 in Corp Taxes. Some Voters Are Sick of It
    In Ohio, where companies like FirstEnergy and Goodyear pay no federal corporate taxes, Democrats haven’t figured out how to leverage anxiety over income inequality to defeat President Trump.
  • Google has added “unsupported browser” warnings for Edge Chromium on Google Docs
  • Teen Suicide Spiked After Debut Of Netflix’s ’13 Reasons Why,’ Study Says
  • Microsoft Build Accelerator – open-source build engine for large systems
  • Eric Schmidt Steps Down from Alphabet’s Board of Directors
    “After 18 years of board mtgs, I’m following coach Bill Campbell’s legacy & helping the next generation of talent to serve. Thanks to Larry, Sergey & all my BOD colleagues! Onward for me as Technical Advisor to coach Alphabet and Google businesses/tech, plus…..”
  • Amazon S3 Batch Operations
  • ‘Math Doesn’t Lie’: Musk Can’t Dodge Tesla Cash Woes Any Longer
  • CallJoy – A cloud-based phone agent for small businesses
    Every day, local small businesses receive 400 million calls from consumers. CallJoy’s phone technology helps them answer with intelligence.
  • The Uber IPO Is a Moral Stain on Silicon Valley
  • Epic Games Is Acquiring Rocket League Developer Psyonix
  • Supreme Court seeks Trump administration views on Google-Oracle copyright feud
  • Google employees are staging a sit-in to protest reported retaliation
  • Small retailers who sold through Amazon are facing a tax time bomb
  • Tesla is raising up to $1.5B through convertible note and share sale
    Tesla is raising up to $1.55 billion through the sale of notes and shares, according to a filing made by the EV maker today. The document outlines that Tesla will sell up to $1.35 billion in convertible senior notes. The number could increase further: Tesla is giving underwriters the chance to buy …
  • Stripe’s fifth engineering hub is Remote
    Stripe has engineering hubs in San Francisco, Seattle, Dublin, and Singapore. We are establishing a fifth hub that is less traditional but no less important: Remote. We are doing this to situate product development closer to our customers, improve our ability to tap the 99.74% of talented engineers living outside the metro areas of our first four hubs, and further our mission of increasing the GDP of the internet.

    Stripe will hire over a hundred remote engineers this year. They will be deployed across every major engineering workstream at Stripe.

    ## Our users are everywhere. We have to be, too.

    Our remotes keep us close to our customers, which is key to building great products. They are deeply embedded in the rhythms of their cities. They see how people purchase food differently in bodegas, konbini, and darshinis. They know why it is important to engineer robustness in the face of slow, unreliable internet connections. They have worked in and run businesses that don’t have access to global payments infrastructure.

    Stripe has had hundreds of extremely high-impact remote employees since inception. Historically, they’ve reported into teams based in one of our hubs. We had a strong preference for managers to be located in-office and for teams to be office-centric, to maximize face-to-face bandwidth when doing creative work.

    As we have grown as a company, we have learned some things.

    One is that the technological substrate of collaboration has gotten *shockingly* good over the last decade. Most engineering work at Stripe happens in conversations between engineers, quiet thinking, and turning those thoughts into artifacts. Of these, thinking is the only one that doesn’t primarily happen online.

    There was a time when writing on a whiteboard had substantially higher bandwidth than a Word doc over email. Thankfully Google Docs, Slack, git, Zoom, and the like deliver high-bandwidth synchronous collaboration on creative work. The experience of using them is so remarkably good that we only notice it when something is broken. Since you write code via pull requests and not whiteboards, your reviewer needs to have access to the same PR; having access to the same whiteboard is strictly optional.

    While we did not initially plan to make hiring remotes a huge part of our engineering efforts, our remote employees have outperformed all expectations. Foundational elements of the Stripe technology stack, our products, our business, and our culture were contributed by remotes. We would be a greatly diminished company without them.

    ## Stripe’s new remote engineering hub

    We have seen such promising results from our remote engineers that we are greatly increasing our investment in remote engineering.

    **We are formalizing our Remote engineering hub.** It is coequal with our physical hubs, and will benefit from some of our experience in scaling engineering.
    For example, there will be dedicated engineering teams in the Remote hub that exist in no other hub. (Some individuals report to a team located in a different hub, and we expect this will remain common, but the bulk of high-bandwidth coworker relationships are within-hub.) We also have a remote engineering lead, analogous to the site leads we have for our physical hubs.

    **We are expanding the scope we will hire for remotely**. In addition to hiring engineers, we plan to begin hiring remote product managers, engineering managers, and technical program managers later this year. (We will continue hiring remote employees in non-engineering positions across the company as well.)

    **We intend to expand our remote engineering hiring aggressively.** We will hire at least a hundred remote engineers this year. We expect to be constrained primarily by our capacity to onboard and support new remote engineers, and we will work to increase that capacity.

    **We will continue to improve the experience of being a remote.** We have carefully tracked the experience of our remote employees, including in our twice-annual employee survey. Most recently, 73% of engineers at Stripe believe we do a good job of integrating remote employees.

    Great user experiences are made in the tiny details. We care about the details to a degree that is borderline obsessive. A recent example: we wrote code to attach a videoconferencing link to every calendar invitation by default, so that remotes never feel awkward having to ask for one.
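
As an illustration only (Stripe has not published this code, and every name here is invented), the default-videoconference-link idea amounts to a small hook over calendar-event creation:

```python
def with_video_link(event, base_url="https://meet.example.com"):
    """Return a copy of a calendar event with a videoconference link
    attached by default (hypothetical sketch; not Stripe's code)."""
    if event.get("video_link"):
        return dict(event)  # respect a link the organizer already chose
    linked = dict(event)
    linked["video_link"] = f"{base_url}/{event['id']}"
    return linked

event = {"id": "eng-sync-123", "title": "Weekly eng sync"}
print(with_video_link(event)["video_link"])  # https://meet.example.com/eng-sync-123
```

The design point is that the default is applied at creation time, so remote attendees never have to ask for a link after the fact.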

    ## More to come

    There are still some constraints on our ambitions. In our first phase, we will be focused primarily on remote engineers in North America, starting with the US and Canada. While we are confident that great work is possible within close time zones, we don’t yet have structures to give remotes a reliably good experience working across large time zone differences. And though we intend to hire remote engineers in Europe and Asia eventually, our hubs in Dublin and Singapore are not sufficiently established to support remotes just yet.

    Most engineers working at Stripe are full-time employees, with a full benefits suite. There is substantial organizational, legal, and financial infrastructure required to support each new jurisdiction we hire in, so we have to be measured in how quickly we expand. We can support most US states today, and plan to expand our hiring capabilities to include jurisdictions covering more than 90% of the US population as quickly as possible. We intend, over the longer term, to be everywhere our customers are.

    We will continue encouraging governments worldwide to lower barriers to hiring. Our customers, from startups to international conglomerates, all feel the pain of this. We think making it easier for companies to hire would produce a step-function increase in global GDP.

    ## We want to talk to you

    We would love to talk about our Remote hub or remote positions at Stripe. Our
    CEO and co-founder, Patrick Collison, and I will host a remote coffee on May 22, 2019;
    sign up to be invited to it. We are also, and always, available on the internet.

  • Microsoft, currently the most valuable company, is having a Nadellaissance
  • Google Will Soon Let You Automatically Scrub Your Location and Web History
    It’s a plus for privacy.
  • Tesla Model 3 Effect – Chevy Dealers Discount 2019 Bolt by Almost $10k
  • BBC admits iPlayer has lost streaming fight with Netflix
  • The making of Amazon Prime, the internet’s most devastating membership program
  • DeepSwarm – Optimising CNNs Using Swarm Intelligence

Other news
